=== RUN TestAddons/serial/Volcano
addons_test.go:819: volcano-controller stabilized in 56.063747ms
addons_test.go:803: volcano-scheduler stabilized in 56.133338ms
addons_test.go:811: volcano-admission stabilized in 56.17912ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-65r4d" [6e5bd60a-88e3-423c-921e-e94e2c7d7f4c] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003583872s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-hpn22" [1680d8ab-4a38-4f57-ab7a-a7f00ae60556] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003754545s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-qbzrm" [546634ef-5828-4b48-b062-719b32cced22] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005387811s
addons_test.go:838: (dbg) Run: kubectl --context addons-246349 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run: kubectl --context addons-246349 create -f testdata/vcjob.yaml
addons_test.go:852: (dbg) Run: kubectl --context addons-246349 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a75b011b-36df-45c9-9e92-f10b1c6f3c11] Pending
helpers_test.go:344: "test-job-nginx-0" [a75b011b-36df-45c9-9e92-f10b1c6f3c11] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:870: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:870: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-246349 -n addons-246349
addons_test.go:870: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-08 18:07:22.342755242 +0000 UTC m=+377.788852296
addons_test.go:870: (dbg) Run: kubectl --context addons-246349 describe po test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-246349 describe po test-job-nginx-0 -n my-volcano:
Name: test-job-nginx-0
Namespace: my-volcano
Priority: 0
Service Account: default
Node: <none>
Labels: volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations: scheduling.k8s.io/group-name: test-job-e0b0a74f-29dc-4939-ac64-a844d746ab21
volcano.sh/job-name: test-job
volcano.sh/job-retry-count: 0
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status: Pending
IP:
IPs: <none>
Controlled By: Job/test-job
Containers:
nginx:
Image: nginx:latest
Port: <none>
Host Port: <none>
Command:
sleep
10m
Limits:
cpu: 1
Requests:
cpu: 1
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m27fg (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-m27fg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m59s volcano 0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:870: (dbg) Run: kubectl --context addons-246349 logs test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-246349 logs test-job-nginx-0 -n my-volcano:
addons_test.go:871: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
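The scheduling failure above is consistent with the resource arithmetic in the pod description: the test-job-nginx-0 container both requests and limits a full CPU (cpu: 1), while the node container was created with only 2 CPUs (--cpus=2 in the docker run command and NanoCpus 2000000000 in the docker inspect output below), and the addons enabled at start time (see the start command in the Audit table below: registry, metrics-server, csi-hostpath-driver, gcp-auth, ingress, volcano, and others) presumably hold enough CPU requests between them that less than one whole CPU remains allocatable. The following standard kubectl commands (not part of the captured test output) would confirm this against the profile:

    kubectl --context addons-246349 describe node addons-246349 | grep -A 8 "Allocated resources"
    kubectl --context addons-246349 get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQUESTS:.spec.containers[*].resources.requests.cpu

The first command prints allocated versus allocatable CPU on the node; the second lists per-pod CPU requests, which should show why a further 1-CPU request cannot be placed.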
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-246349
helpers_test.go:235: (dbg) docker inspect addons-246349:
-- stdout --
[
{
"Id": "b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129",
"Created": "2024-10-08T18:01:53.50466759Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 289792,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-10-08T18:01:53.662285803Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
"ResolvConfPath": "/var/lib/docker/containers/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129/hostname",
"HostsPath": "/var/lib/docker/containers/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129/hosts",
"LogPath": "/var/lib/docker/containers/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129-json.log",
"Name": "/addons-246349",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-246349:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-246349",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/66e83a199c835e1ea1618fc86a0613fb863b49a68940d27301f39f12aa13878a-init/diff:/var/lib/docker/overlay2/211ed394d64374fe90b3e50a914ebed5f9b85a2e1d8650161b42163931148dcb/diff",
"MergedDir": "/var/lib/docker/overlay2/66e83a199c835e1ea1618fc86a0613fb863b49a68940d27301f39f12aa13878a/merged",
"UpperDir": "/var/lib/docker/overlay2/66e83a199c835e1ea1618fc86a0613fb863b49a68940d27301f39f12aa13878a/diff",
"WorkDir": "/var/lib/docker/overlay2/66e83a199c835e1ea1618fc86a0613fb863b49a68940d27301f39f12aa13878a/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-246349",
"Source": "/var/lib/docker/volumes/addons-246349/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-246349",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-246349",
"name.minikube.sigs.k8s.io": "addons-246349",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "ba58cd093c80c58e5ed6645deebd3075792f73b3bbef2695519383d02ddbafbb",
"SandboxKey": "/var/run/docker/netns/ba58cd093c80",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33133"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33134"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33137"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33135"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33136"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-246349": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "2a180a99cf8d2485671b427907cc072dabd588376eba9049e3d11f70ac4770c9",
"EndpointID": "f78544cafbd2ce86c1d7c806a6029264f42fbaaee48bda0e72945bf9cca700c8",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-246349",
"b9855b2e0c72"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-246349 -n addons-246349
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-246349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 logs -n 25: (1.564869787s)
helpers_test.go:252: TestAddons/serial/Volcano logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-945652 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | |
| | -p download-only-945652 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
| delete | -p download-only-945652 | download-only-945652 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
| start | -o=json --download-only | download-only-063477 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | |
| | -p download-only-063477 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
| delete | -p download-only-063477 | download-only-063477 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
| delete | -p download-only-945652 | download-only-945652 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
| delete | -p download-only-063477 | download-only-063477 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
| start | --download-only -p | download-docker-419107 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | |
| | download-docker-419107 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p download-docker-419107 | download-docker-419107 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
| start | --download-only -p | binary-mirror-075119 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | |
| | binary-mirror-075119 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:34241 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p binary-mirror-075119 | binary-mirror-075119 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
| addons | disable dashboard -p | addons-246349 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | |
| | addons-246349 | | | | | |
| addons | enable dashboard -p | addons-246349 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | |
| | addons-246349 | | | | | |
| start | -p addons-246349 --wait=true | addons-246349 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:04 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/10/08 18:01:29
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.23.1 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1008 18:01:29.278418 289308 out.go:345] Setting OutFile to fd 1 ...
I1008 18:01:29.278619 289308 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:01:29.278647 289308 out.go:358] Setting ErrFile to fd 2...
I1008 18:01:29.278667 289308 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:01:29.278949 289308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
I1008 18:01:29.279465 289308 out.go:352] Setting JSON to false
I1008 18:01:29.280386 289308 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6238,"bootTime":1728404252,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1008 18:01:29.280487 289308 start.go:139] virtualization:
I1008 18:01:29.282193 289308 out.go:177] * [addons-246349] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1008 18:01:29.283475 289308 out.go:177] - MINIKUBE_LOCATION=19774
I1008 18:01:29.283554 289308 notify.go:220] Checking for updates...
I1008 18:01:29.285740 289308 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1008 18:01:29.286922 289308 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
I1008 18:01:29.287950 289308 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
I1008 18:01:29.289102 289308 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1008 18:01:29.290141 289308 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1008 18:01:29.291368 289308 driver.go:394] Setting default libvirt URI to qemu:///system
I1008 18:01:29.311273 289308 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1008 18:01:29.311410 289308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1008 18:01:29.380635 289308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-08 18:01:29.370884062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1008 18:01:29.380761 289308 docker.go:318] overlay module found
I1008 18:01:29.382798 289308 out.go:177] * Using the docker driver based on user configuration
I1008 18:01:29.384007 289308 start.go:297] selected driver: docker
I1008 18:01:29.384034 289308 start.go:901] validating driver "docker" against <nil>
I1008 18:01:29.384047 289308 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1008 18:01:29.384711 289308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1008 18:01:29.434459 289308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-08 18:01:29.422164892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1008 18:01:29.434663 289308 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I1008 18:01:29.434901 289308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1008 18:01:29.436093 289308 out.go:177] * Using Docker driver with root privileges
I1008 18:01:29.437058 289308 cni.go:84] Creating CNI manager for ""
I1008 18:01:29.437125 289308 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1008 18:01:29.437136 289308 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I1008 18:01:29.437209 289308 start.go:340] cluster config:
{Name:addons-246349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 18:01:29.438480 289308 out.go:177] * Starting "addons-246349" primary control-plane node in "addons-246349" cluster
I1008 18:01:29.439600 289308 cache.go:121] Beginning downloading kic base image for docker with containerd
I1008 18:01:29.440837 289308 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
I1008 18:01:29.441867 289308 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1008 18:01:29.441921 289308 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
I1008 18:01:29.441933 289308 cache.go:56] Caching tarball of preloaded images
I1008 18:01:29.441955 289308 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
I1008 18:01:29.442016 289308 preload.go:172] Found /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1008 18:01:29.442027 289308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
I1008 18:01:29.442368 289308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/config.json ...
I1008 18:01:29.442437 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/config.json: {Name:mk94e4f0080f368eed201b4abc12c0f546003cbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:01:29.456568 289308 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
I1008 18:01:29.456702 289308 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
I1008 18:01:29.456728 289308 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
I1008 18:01:29.456732 289308 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
I1008 18:01:29.456740 289308 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
I1008 18:01:29.456745 289308 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
I1008 18:01:46.591054 289308 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
I1008 18:01:46.591094 289308 cache.go:194] Successfully downloaded all kic artifacts
I1008 18:01:46.591134 289308 start.go:360] acquireMachinesLock for addons-246349: {Name:mke529fb19b7ca87311bc65a32cc4a27a559389d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 18:01:46.591262 289308 start.go:364] duration metric: took 104.937µs to acquireMachinesLock for "addons-246349"
I1008 18:01:46.591294 289308 start.go:93] Provisioning new machine with config: &{Name:addons-246349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1008 18:01:46.591382 289308 start.go:125] createHost starting for "" (driver="docker")
I1008 18:01:46.594434 289308 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1008 18:01:46.594682 289308 start.go:159] libmachine.API.Create for "addons-246349" (driver="docker")
I1008 18:01:46.594717 289308 client.go:168] LocalClient.Create starting
I1008 18:01:46.594831 289308 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem
I1008 18:01:46.911568 289308 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem
I1008 18:01:47.674158 289308 cli_runner.go:164] Run: docker network inspect addons-246349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1008 18:01:47.688663 289308 cli_runner.go:211] docker network inspect addons-246349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1008 18:01:47.688754 289308 network_create.go:284] running [docker network inspect addons-246349] to gather additional debugging logs...
I1008 18:01:47.688777 289308 cli_runner.go:164] Run: docker network inspect addons-246349
W1008 18:01:47.704166 289308 cli_runner.go:211] docker network inspect addons-246349 returned with exit code 1
I1008 18:01:47.704205 289308 network_create.go:287] error running [docker network inspect addons-246349]: docker network inspect addons-246349: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-246349 not found
I1008 18:01:47.704220 289308 network_create.go:289] output of [docker network inspect addons-246349]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-246349 not found
** /stderr **
I1008 18:01:47.704330 289308 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 18:01:47.720298 289308 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400187c880}
I1008 18:01:47.720348 289308 network_create.go:124] attempt to create docker network addons-246349 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1008 18:01:47.720408 289308 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-246349 addons-246349
I1008 18:01:47.787152 289308 network_create.go:108] docker network addons-246349 192.168.49.0/24 created
I1008 18:01:47.787184 289308 kic.go:121] calculated static IP "192.168.49.2" for the "addons-246349" container
I1008 18:01:47.787269 289308 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1008 18:01:47.802194 289308 cli_runner.go:164] Run: docker volume create addons-246349 --label name.minikube.sigs.k8s.io=addons-246349 --label created_by.minikube.sigs.k8s.io=true
I1008 18:01:47.818822 289308 oci.go:103] Successfully created a docker volume addons-246349
I1008 18:01:47.818922 289308 cli_runner.go:164] Run: docker run --rm --name addons-246349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-246349 --entrypoint /usr/bin/test -v addons-246349:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
I1008 18:01:49.397817 289308 cli_runner.go:217] Completed: docker run --rm --name addons-246349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-246349 --entrypoint /usr/bin/test -v addons-246349:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (1.578853468s)
I1008 18:01:49.397847 289308 oci.go:107] Successfully prepared a docker volume addons-246349
I1008 18:01:49.397868 289308 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1008 18:01:49.397887 289308 kic.go:194] Starting extracting preloaded images to volume ...
I1008 18:01:49.397955 289308 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-246349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
I1008 18:01:53.438303 289308 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-246349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.040309943s)
I1008 18:01:53.438342 289308 kic.go:203] duration metric: took 4.040451094s to extract preloaded images to volume ...
W1008 18:01:53.438475 289308 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1008 18:01:53.438583 289308 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1008 18:01:53.490414 289308 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-246349 --name addons-246349 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-246349 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-246349 --network addons-246349 --ip 192.168.49.2 --volume addons-246349:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
I1008 18:01:53.827521 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Running}}
I1008 18:01:53.845576 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:01:53.870112 289308 cli_runner.go:164] Run: docker exec addons-246349 stat /var/lib/dpkg/alternatives/iptables
I1008 18:01:53.952672 289308 oci.go:144] the created container "addons-246349" has a running status.
I1008 18:01:53.952699 289308 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa...
I1008 18:01:54.173506 289308 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1008 18:01:54.193593 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:01:54.221734 289308 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1008 18:01:54.221756 289308 kic_runner.go:114] Args: [docker exec --privileged addons-246349 chown docker:docker /home/docker/.ssh/authorized_keys]
I1008 18:01:54.298564 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:01:54.335312 289308 machine.go:93] provisionDockerMachine start ...
I1008 18:01:54.335406 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:01:54.365052 289308 main.go:141] libmachine: Using SSH client type: native
I1008 18:01:54.365311 289308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil> [] 0s} 127.0.0.1 33133 <nil> <nil>}
I1008 18:01:54.365327 289308 main.go:141] libmachine: About to run SSH command:
hostname
I1008 18:01:54.365985 289308 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1008 18:01:57.497335 289308 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-246349
I1008 18:01:57.497366 289308 ubuntu.go:169] provisioning hostname "addons-246349"
I1008 18:01:57.497440 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:01:57.520350 289308 main.go:141] libmachine: Using SSH client type: native
I1008 18:01:57.520617 289308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil> [] 0s} 127.0.0.1 33133 <nil> <nil>}
I1008 18:01:57.520636 289308 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-246349 && echo "addons-246349" | sudo tee /etc/hostname
I1008 18:01:57.661825 289308 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-246349
I1008 18:01:57.661909 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:01:57.680856 289308 main.go:141] libmachine: Using SSH client type: native
I1008 18:01:57.681110 289308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil> [] 0s} 127.0.0.1 33133 <nil> <nil>}
I1008 18:01:57.681132 289308 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-246349' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-246349/g' /etc/hosts;
else
echo '127.0.1.1 addons-246349' | sudo tee -a /etc/hosts;
fi
fi
I1008 18:01:57.809766 289308 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1008 18:01:57.809795 289308 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19774-283126/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-283126/.minikube}
I1008 18:01:57.809816 289308 ubuntu.go:177] setting up certificates
I1008 18:01:57.809826 289308 provision.go:84] configureAuth start
I1008 18:01:57.809887 289308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-246349
I1008 18:01:57.826322 289308 provision.go:143] copyHostCerts
I1008 18:01:57.826403 289308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/ca.pem (1078 bytes)
I1008 18:01:57.826563 289308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/cert.pem (1123 bytes)
I1008 18:01:57.826633 289308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/key.pem (1679 bytes)
I1008 18:01:57.826687 289308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem org=jenkins.addons-246349 san=[127.0.0.1 192.168.49.2 addons-246349 localhost minikube]
I1008 18:01:58.107470 289308 provision.go:177] copyRemoteCerts
I1008 18:01:58.107542 289308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1008 18:01:58.107583 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:01:58.126075 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:01:58.218625 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1008 18:01:58.243252 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1008 18:01:58.268504 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1008 18:01:58.293150 289308 provision.go:87] duration metric: took 483.298553ms to configureAuth
I1008 18:01:58.293177 289308 ubuntu.go:193] setting minikube options for container-runtime
I1008 18:01:58.293390 289308 config.go:182] Loaded profile config "addons-246349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:01:58.293402 289308 machine.go:96] duration metric: took 3.958071381s to provisionDockerMachine
I1008 18:01:58.293409 289308 client.go:171] duration metric: took 11.698680459s to LocalClient.Create
I1008 18:01:58.293429 289308 start.go:167] duration metric: took 11.698747441s to libmachine.API.Create "addons-246349"
I1008 18:01:58.293440 289308 start.go:293] postStartSetup for "addons-246349" (driver="docker")
I1008 18:01:58.293449 289308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1008 18:01:58.293504 289308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1008 18:01:58.293548 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:01:58.309976 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:01:58.402718 289308 ssh_runner.go:195] Run: cat /etc/os-release
I1008 18:01:58.406162 289308 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1008 18:01:58.406200 289308 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1008 18:01:58.406238 289308 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1008 18:01:58.406253 289308 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1008 18:01:58.406263 289308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-283126/.minikube/addons for local assets ...
I1008 18:01:58.406338 289308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-283126/.minikube/files for local assets ...
I1008 18:01:58.406372 289308 start.go:296] duration metric: took 112.92593ms for postStartSetup
I1008 18:01:58.406688 289308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-246349
I1008 18:01:58.422340 289308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/config.json ...
I1008 18:01:58.422632 289308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1008 18:01:58.422682 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:01:58.439210 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:01:58.531022 289308 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1008 18:01:58.535776 289308 start.go:128] duration metric: took 11.944377164s to createHost
I1008 18:01:58.535801 289308 start.go:83] releasing machines lock for "addons-246349", held for 11.94452606s
I1008 18:01:58.535873 289308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-246349
I1008 18:01:58.552606 289308 ssh_runner.go:195] Run: cat /version.json
I1008 18:01:58.552623 289308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1008 18:01:58.552662 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:01:58.552720 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:01:58.572193 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:01:58.586133 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:01:58.665133 289308 ssh_runner.go:195] Run: systemctl --version
I1008 18:01:58.797409 289308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1008 18:01:58.801727 289308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1008 18:01:58.827252 289308 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1008 18:01:58.827330 289308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1008 18:01:58.857318 289308 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I1008 18:01:58.857341 289308 start.go:495] detecting cgroup driver to use...
I1008 18:01:58.857381 289308 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1008 18:01:58.857437 289308 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1008 18:01:58.870073 289308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1008 18:01:58.882083 289308 docker.go:217] disabling cri-docker service (if available) ...
I1008 18:01:58.882147 289308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1008 18:01:58.896771 289308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1008 18:01:58.911604 289308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1008 18:01:58.999715 289308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1008 18:01:59.093807 289308 docker.go:233] disabling docker service ...
I1008 18:01:59.093879 289308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1008 18:01:59.114299 289308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1008 18:01:59.126268 289308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1008 18:01:59.214638 289308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1008 18:01:59.303331 289308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1008 18:01:59.314816 289308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1008 18:01:59.330641 289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1008 18:01:59.340503 289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1008 18:01:59.350162 289308 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1008 18:01:59.350276 289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1008 18:01:59.360931 289308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1008 18:01:59.370913 289308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1008 18:01:59.380725 289308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1008 18:01:59.390312 289308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1008 18:01:59.399480 289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1008 18:01:59.409528 289308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1008 18:01:59.419248 289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1008 18:01:59.429322 289308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1008 18:01:59.437961 289308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1008 18:01:59.446851 289308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 18:01:59.538332 289308 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1008 18:01:59.668537 289308 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1008 18:01:59.668693 289308 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1008 18:01:59.672173 289308 start.go:563] Will wait 60s for crictl version
I1008 18:01:59.672236 289308 ssh_runner.go:195] Run: which crictl
I1008 18:01:59.675556 289308 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1008 18:01:59.716433 289308 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1008 18:01:59.716518 289308 ssh_runner.go:195] Run: containerd --version
I1008 18:01:59.738929 289308 ssh_runner.go:195] Run: containerd --version
I1008 18:01:59.767370 289308 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
I1008 18:01:59.770201 289308 cli_runner.go:164] Run: docker network inspect addons-246349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 18:01:59.785784 289308 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1008 18:01:59.789412 289308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1008 18:01:59.800144 289308 kubeadm.go:883] updating cluster {Name:addons-246349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1008 18:01:59.800275 289308 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1008 18:01:59.800342 289308 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 18:01:59.836516 289308 containerd.go:627] all images are preloaded for containerd runtime.
I1008 18:01:59.836540 289308 containerd.go:534] Images already preloaded, skipping extraction
I1008 18:01:59.836599 289308 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 18:01:59.875826 289308 containerd.go:627] all images are preloaded for containerd runtime.
I1008 18:01:59.875850 289308 cache_images.go:84] Images are preloaded, skipping loading
I1008 18:01:59.875858 289308 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
I1008 18:01:59.875951 289308 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-246349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1008 18:01:59.876020 289308 ssh_runner.go:195] Run: sudo crictl info
I1008 18:01:59.912459 289308 cni.go:84] Creating CNI manager for ""
I1008 18:01:59.912485 289308 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1008 18:01:59.912495 289308 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1008 18:01:59.912518 289308 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-246349 NodeName:addons-246349 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1008 18:01:59.912650 289308 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "addons-246349"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
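The kubeadm configuration printed above is what gets written to /var/tmp/minikube/kubeadm.yaml further down in the log. If it ever needs to be checked by hand, kubeadm can do so directly; these invocations are a hypothetical aside, not part of this run:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml    # kubeadm v1.26+
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run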
I1008 18:01:59.912723 289308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I1008 18:01:59.921638 289308 binaries.go:44] Found k8s binaries, skipping transfer
I1008 18:01:59.921731 289308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1008 18:01:59.930744 289308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
I1008 18:01:59.949112 289308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1008 18:01:59.967515 289308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
I1008 18:01:59.987043 289308 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1008 18:01:59.990585 289308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1008 18:02:00.002324 289308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 18:02:00.093478 289308 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1008 18:02:00.112654 289308 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349 for IP: 192.168.49.2
I1008 18:02:00.112683 289308 certs.go:194] generating shared ca certs ...
I1008 18:02:00.112705 289308 certs.go:226] acquiring lock for ca certs: {Name:mk9b4a4bb626944e2ef6352dc46232c13e820586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:00.112861 289308 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key
I1008 18:02:01.095619 289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt ...
I1008 18:02:01.095656 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt: {Name:mk6969eb7cf1a3587be1795d424d67277866ca0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:01.095886 289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key ...
I1008 18:02:01.095901 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key: {Name:mk4e91d6155c29d94b5277a3c747b1852e798f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:01.095996 289308 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key
I1008 18:02:01.550152 289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.crt ...
I1008 18:02:01.550184 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.crt: {Name:mkd127f52a9e243d3bf49581033f9c43927a305f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:01.550389 289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key ...
I1008 18:02:01.550405 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key: {Name:mk5534a5a90f70d374aace592195b18ea32d220f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:01.550487 289308 certs.go:256] generating profile certs ...
I1008 18:02:01.550552 289308 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.key
I1008 18:02:01.550580 289308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt with IP's: []
I1008 18:02:02.021490 289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt ...
I1008 18:02:02.021522 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: {Name:mk1373c5dc4bbc33d45f7cfe069209ca7c0c5fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:02.021722 289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.key ...
I1008 18:02:02.021737 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.key: {Name:mkc8101c3cf4d35b1bf598206c9e6092646c5995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:02.021823 289308 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key.3c3158c0
I1008 18:02:02.021845 289308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt.3c3158c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1008 18:02:02.296153 289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt.3c3158c0 ...
I1008 18:02:02.296183 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt.3c3158c0: {Name:mk7491c5231d1f7adeb0cab2720c5ac4f612baed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:02.296736 289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key.3c3158c0 ...
I1008 18:02:02.296755 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key.3c3158c0: {Name:mk5d12e162a35a1810c72cce431d4b479dc6c40d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:02.296854 289308 certs.go:381] copying /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt.3c3158c0 -> /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt
I1008 18:02:02.296936 289308 certs.go:385] copying /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key.3c3158c0 -> /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key
I1008 18:02:02.296992 289308 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.key
I1008 18:02:02.297012 289308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.crt with IP's: []
I1008 18:02:02.774343 289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.crt ...
I1008 18:02:02.774375 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.crt: {Name:mka0a3694f2abba948f1a2cff851748ae260ee68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:02.774563 289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.key ...
I1008 18:02:02.774578 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.key: {Name:mkac479fa4598dd9d4a98c039c1642b5c0032f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:02.774770 289308 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem (1675 bytes)
I1008 18:02:02.774815 289308 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem (1078 bytes)
I1008 18:02:02.774845 289308 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem (1123 bytes)
I1008 18:02:02.774878 289308 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem (1679 bytes)
I1008 18:02:02.775503 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1008 18:02:02.800021 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1008 18:02:02.824552 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1008 18:02:02.848749 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1008 18:02:02.872633 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1008 18:02:02.895915 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1008 18:02:02.919464 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1008 18:02:02.943946 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1008 18:02:02.968294 289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1008 18:02:02.992275 289308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1008 18:02:03.010179 289308 ssh_runner.go:195] Run: openssl version
I1008 18:02:03.015711 289308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1008 18:02:03.025098 289308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1008 18:02:03.029017 289308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 8 18:02 /usr/share/ca-certificates/minikubeCA.pem
I1008 18:02:03.029092 289308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1008 18:02:03.036092 289308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
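The two commands above publish the minikube CA into the host trust store: the certificate is symlinked into /etc/ssl/certs, and a second link named after its OpenSSL subject hash is added so that hash-based lookup can find it. Judging by the b5213941.0 link created here, the hash call returned b5213941. A manual equivalent, assuming the same paths as in this log, would be:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0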
I1008 18:02:03.045912 289308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1008 18:02:03.049314 289308 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1008 18:02:03.049360 289308 kubeadm.go:392] StartCluster: {Name:addons-246349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 18:02:03.049460 289308 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1008 18:02:03.049532 289308 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1008 18:02:03.087401 289308 cri.go:89] found id: ""
I1008 18:02:03.087473 289308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1008 18:02:03.100262 289308 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1008 18:02:03.109244 289308 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1008 18:02:03.109315 289308 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1008 18:02:03.120882 289308 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1008 18:02:03.120905 289308 kubeadm.go:157] found existing configuration files:
I1008 18:02:03.120956 289308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1008 18:02:03.130866 289308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1008 18:02:03.130939 289308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1008 18:02:03.139493 289308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1008 18:02:03.148879 289308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1008 18:02:03.148953 289308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1008 18:02:03.158295 289308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1008 18:02:03.167468 289308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1008 18:02:03.167534 289308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1008 18:02:03.175939 289308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1008 18:02:03.184741 289308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1008 18:02:03.184818 289308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1008 18:02:03.193713 289308 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1008 18:02:03.234276 289308 kubeadm.go:310] W1008 18:02:03.233565 1031 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I1008 18:02:03.235106 289308 kubeadm.go:310] W1008 18:02:03.234598 1031 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I1008 18:02:03.258811 289308 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
I1008 18:02:03.318083 289308 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1008 18:02:20.675326 289308 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I1008 18:02:20.675386 289308 kubeadm.go:310] [preflight] Running pre-flight checks
I1008 18:02:20.675483 289308 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I1008 18:02:20.675544 289308 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
I1008 18:02:20.675584 289308 kubeadm.go:310] OS: Linux
I1008 18:02:20.675634 289308 kubeadm.go:310] CGROUPS_CPU: enabled
I1008 18:02:20.675684 289308 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I1008 18:02:20.675734 289308 kubeadm.go:310] CGROUPS_CPUSET: enabled
I1008 18:02:20.675806 289308 kubeadm.go:310] CGROUPS_DEVICES: enabled
I1008 18:02:20.675865 289308 kubeadm.go:310] CGROUPS_FREEZER: enabled
I1008 18:02:20.675937 289308 kubeadm.go:310] CGROUPS_MEMORY: enabled
I1008 18:02:20.675986 289308 kubeadm.go:310] CGROUPS_PIDS: enabled
I1008 18:02:20.676052 289308 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I1008 18:02:20.676114 289308 kubeadm.go:310] CGROUPS_BLKIO: enabled
I1008 18:02:20.676202 289308 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I1008 18:02:20.676304 289308 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1008 18:02:20.676411 289308 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1008 18:02:20.676480 289308 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1008 18:02:20.679221 289308 out.go:235] - Generating certificates and keys ...
I1008 18:02:20.679319 289308 kubeadm.go:310] [certs] Using existing ca certificate authority
I1008 18:02:20.679389 289308 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I1008 18:02:20.679460 289308 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I1008 18:02:20.679519 289308 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I1008 18:02:20.679582 289308 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I1008 18:02:20.679635 289308 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I1008 18:02:20.679701 289308 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I1008 18:02:20.679820 289308 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-246349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1008 18:02:20.679875 289308 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I1008 18:02:20.679998 289308 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-246349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1008 18:02:20.680067 289308 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I1008 18:02:20.680133 289308 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I1008 18:02:20.680181 289308 kubeadm.go:310] [certs] Generating "sa" key and public key
I1008 18:02:20.680239 289308 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1008 18:02:20.680293 289308 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I1008 18:02:20.680352 289308 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1008 18:02:20.680412 289308 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1008 18:02:20.680478 289308 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1008 18:02:20.680535 289308 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1008 18:02:20.680618 289308 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1008 18:02:20.680688 289308 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1008 18:02:20.683385 289308 out.go:235] - Booting up control plane ...
I1008 18:02:20.683493 289308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1008 18:02:20.683581 289308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1008 18:02:20.683652 289308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1008 18:02:20.683756 289308 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1008 18:02:20.683844 289308 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1008 18:02:20.683888 289308 kubeadm.go:310] [kubelet-start] Starting the kubelet
I1008 18:02:20.684019 289308 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1008 18:02:20.684125 289308 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1008 18:02:20.684197 289308 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500856043s
I1008 18:02:20.684272 289308 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I1008 18:02:20.684333 289308 kubeadm.go:310] [api-check] The API server is healthy after 6.001281442s
I1008 18:02:20.684442 289308 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1008 18:02:20.684569 289308 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1008 18:02:20.684631 289308 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I1008 18:02:20.684811 289308 kubeadm.go:310] [mark-control-plane] Marking the node addons-246349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1008 18:02:20.684871 289308 kubeadm.go:310] [bootstrap-token] Using token: 0kq8kp.ln4racqss42qwugy
I1008 18:02:20.687570 289308 out.go:235] - Configuring RBAC rules ...
I1008 18:02:20.687707 289308 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1008 18:02:20.687818 289308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1008 18:02:20.687994 289308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1008 18:02:20.688189 289308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1008 18:02:20.688332 289308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1008 18:02:20.688457 289308 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1008 18:02:20.688588 289308 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1008 18:02:20.688646 289308 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I1008 18:02:20.688706 289308 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I1008 18:02:20.688717 289308 kubeadm.go:310]
I1008 18:02:20.688786 289308 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I1008 18:02:20.688795 289308 kubeadm.go:310]
I1008 18:02:20.688873 289308 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I1008 18:02:20.688881 289308 kubeadm.go:310]
I1008 18:02:20.688924 289308 kubeadm.go:310] mkdir -p $HOME/.kube
I1008 18:02:20.688987 289308 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1008 18:02:20.689038 289308 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1008 18:02:20.689042 289308 kubeadm.go:310]
I1008 18:02:20.689101 289308 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I1008 18:02:20.689106 289308 kubeadm.go:310]
I1008 18:02:20.689156 289308 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I1008 18:02:20.689162 289308 kubeadm.go:310]
I1008 18:02:20.689221 289308 kubeadm.go:310] You should now deploy a pod network to the cluster.
I1008 18:02:20.689297 289308 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1008 18:02:20.689376 289308 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1008 18:02:20.689396 289308 kubeadm.go:310]
I1008 18:02:20.689495 289308 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I1008 18:02:20.689580 289308 kubeadm.go:310] and service account keys on each node and then running the following as root:
I1008 18:02:20.689588 289308 kubeadm.go:310]
I1008 18:02:20.689700 289308 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0kq8kp.ln4racqss42qwugy \
I1008 18:02:20.689806 289308 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:b97bb3e8e417b962d820ebc093937d5128e022499abe774f12128a2d4bef5329 \
I1008 18:02:20.689835 289308 kubeadm.go:310] --control-plane
I1008 18:02:20.689845 289308 kubeadm.go:310]
I1008 18:02:20.689961 289308 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I1008 18:02:20.689979 289308 kubeadm.go:310]
I1008 18:02:20.690095 289308 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0kq8kp.ln4racqss42qwugy \
I1008 18:02:20.690264 289308 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:b97bb3e8e417b962d820ebc093937d5128e022499abe774f12128a2d4bef5329
I1008 18:02:20.690280 289308 cni.go:84] Creating CNI manager for ""
I1008 18:02:20.690299 289308 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1008 18:02:20.694837 289308 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1008 18:02:20.697474 289308 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1008 18:02:20.702106 289308 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
I1008 18:02:20.702128 289308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I1008 18:02:20.720647 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1008 18:02:20.997710 289308 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1008 18:02:20.997862 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-246349 minikube.k8s.io/updated_at=2024_10_08T18_02_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=addons-246349 minikube.k8s.io/primary=true
I1008 18:02:20.997866 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:21.005977 289308 ops.go:34] apiserver oom_adj: -16
I1008 18:02:21.129688 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:21.630397 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:22.129815 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:22.630402 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:23.130268 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:23.630719 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:24.129716 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:24.629768 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:25.130671 289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 18:02:25.241765 289308 kubeadm.go:1113] duration metric: took 4.243964247s to wait for elevateKubeSystemPrivileges
I1008 18:02:25.241793 289308 kubeadm.go:394] duration metric: took 22.19243706s to StartCluster
I1008 18:02:25.241810 289308 settings.go:142] acquiring lock: {Name:mk88999f347ab2e93b53f54a6e8df12c27df7c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:25.241932 289308 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19774-283126/kubeconfig
I1008 18:02:25.242321 289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/kubeconfig: {Name:mkc40596aa3771ba8a6c8897a16b531991d7bc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 18:02:25.242925 289308 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1008 18:02:25.243057 289308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1008 18:02:25.243297 289308 config.go:182] Loaded profile config "addons-246349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:02:25.243324 289308 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1008 18:02:25.243395 289308 addons.go:69] Setting yakd=true in profile "addons-246349"
I1008 18:02:25.243409 289308 addons.go:234] Setting addon yakd=true in "addons-246349"
I1008 18:02:25.243431 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.243955 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.244341 289308 addons.go:69] Setting metrics-server=true in profile "addons-246349"
I1008 18:02:25.244380 289308 addons.go:234] Setting addon metrics-server=true in "addons-246349"
I1008 18:02:25.244407 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.244852 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.246732 289308 addons.go:69] Setting cloud-spanner=true in profile "addons-246349"
I1008 18:02:25.247776 289308 addons.go:234] Setting addon cloud-spanner=true in "addons-246349"
I1008 18:02:25.247937 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.248517 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.249401 289308 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-246349"
I1008 18:02:25.249483 289308 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-246349"
I1008 18:02:25.249541 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.250066 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.256052 289308 addons.go:69] Setting default-storageclass=true in profile "addons-246349"
I1008 18:02:25.256141 289308 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-246349"
I1008 18:02:25.256562 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.247693 289308 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-246349"
I1008 18:02:25.256964 289308 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-246349"
I1008 18:02:25.256997 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.257444 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.257595 289308 addons.go:69] Setting gcp-auth=true in profile "addons-246349"
I1008 18:02:25.257618 289308 mustload.go:65] Loading cluster: addons-246349
I1008 18:02:25.257949 289308 config.go:182] Loaded profile config "addons-246349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:02:25.258195 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.247702 289308 addons.go:69] Setting registry=true in profile "addons-246349"
I1008 18:02:25.263705 289308 addons.go:234] Setting addon registry=true in "addons-246349"
I1008 18:02:25.263829 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.264482 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.273776 289308 addons.go:69] Setting ingress=true in profile "addons-246349"
I1008 18:02:25.273870 289308 addons.go:234] Setting addon ingress=true in "addons-246349"
I1008 18:02:25.273952 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.247708 289308 addons.go:69] Setting storage-provisioner=true in profile "addons-246349"
I1008 18:02:25.274644 289308 addons.go:234] Setting addon storage-provisioner=true in "addons-246349"
I1008 18:02:25.274729 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.276353 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.247717 289308 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-246349"
I1008 18:02:25.277255 289308 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-246349"
I1008 18:02:25.247721 289308 addons.go:69] Setting volcano=true in profile "addons-246349"
I1008 18:02:25.277888 289308 addons.go:234] Setting addon volcano=true in "addons-246349"
I1008 18:02:25.278023 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.276424 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.247756 289308 out.go:177] * Verifying Kubernetes components...
I1008 18:02:25.317411 289308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 18:02:25.318374 289308 addons.go:69] Setting ingress-dns=true in profile "addons-246349"
I1008 18:02:25.318590 289308 addons.go:234] Setting addon ingress-dns=true in "addons-246349"
I1008 18:02:25.318695 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.319315 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.247724 289308 addons.go:69] Setting volumesnapshots=true in profile "addons-246349"
I1008 18:02:25.335526 289308 addons.go:234] Setting addon volumesnapshots=true in "addons-246349"
I1008 18:02:25.335682 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.336279 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.368318 289308 addons.go:69] Setting inspektor-gadget=true in profile "addons-246349"
I1008 18:02:25.368351 289308 addons.go:234] Setting addon inspektor-gadget=true in "addons-246349"
I1008 18:02:25.368392 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.368973 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.400736 289308 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I1008 18:02:25.405190 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.428843 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.434426 289308 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I1008 18:02:25.434690 289308 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1008 18:02:25.434707 289308 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1008 18:02:25.434797 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.469243 289308 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I1008 18:02:25.469318 289308 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1008 18:02:25.473426 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.477103 289308 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I1008 18:02:25.477339 289308 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I1008 18:02:25.480299 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.513876 289308 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1008 18:02:25.514065 289308 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
I1008 18:02:25.515055 289308 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1008 18:02:25.515073 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1008 18:02:25.515142 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.520176 289308 addons.go:234] Setting addon default-storageclass=true in "addons-246349"
I1008 18:02:25.531070 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.529263 289308 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I1008 18:02:25.529282 289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1008 18:02:25.531847 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.537869 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1008 18:02:25.538348 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.547417 289308 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I1008 18:02:25.537943 289308 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1008 18:02:25.547470 289308 out.go:177] - Using image docker.io/registry:2.8.3
I1008 18:02:25.550836 289308 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I1008 18:02:25.547478 289308 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1008 18:02:25.551076 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.547484 289308 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1008 18:02:25.551586 289308 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I1008 18:02:25.551855 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1008 18:02:25.551945 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.573603 289308 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1008 18:02:25.573772 289308 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I1008 18:02:25.581903 289308 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1008 18:02:25.581932 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1008 18:02:25.582003 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.589006 289308 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-246349"
I1008 18:02:25.589098 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:25.589565 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:25.616815 289308 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I1008 18:02:25.619628 289308 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1008 18:02:25.619650 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1008 18:02:25.619718 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.622775 289308 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1008 18:02:25.625493 289308 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1008 18:02:25.632973 289308 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1008 18:02:25.634288 289308 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1008 18:02:25.634310 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I1008 18:02:25.634398 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.638305 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.643644 289308 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
I1008 18:02:25.643719 289308 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1008 18:02:25.647977 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.649615 289308 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I1008 18:02:25.649641 289308 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I1008 18:02:25.649848 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.662287 289308 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1008 18:02:25.665110 289308 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1008 18:02:25.671900 289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1008 18:02:25.671924 289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1008 18:02:25.672002 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.705203 289308 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
I1008 18:02:25.713855 289308 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
I1008 18:02:25.717761 289308 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
I1008 18:02:25.728928 289308 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I1008 18:02:25.728951 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
I1008 18:02:25.729021 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.729502 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.759742 289308 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1008 18:02:25.759765 289308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1008 18:02:25.759828 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.768802 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.769609 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.785085 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.785202 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.786905 289308 out.go:177] - Using image docker.io/busybox:stable
I1008 18:02:25.793136 289308 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1008 18:02:25.799449 289308 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1008 18:02:25.799472 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1008 18:02:25.799539 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:25.844143 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.872160 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.875786 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.888058 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.888799 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:25.890127 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
W1008 18:02:25.893932 289308 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I1008 18:02:25.893961 289308 retry.go:31] will retry after 177.194408ms: ssh: handshake failed: EOF
I1008 18:02:25.902732 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:26.309381 289308 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1008 18:02:26.309508 289308 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.066432912s)
I1008 18:02:26.309735 289308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1008 18:02:26.323142 289308 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I1008 18:02:26.323162 289308 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1008 18:02:26.386165 289308 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I1008 18:02:26.386234 289308 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1008 18:02:26.558849 289308 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I1008 18:02:26.558870 289308 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1008 18:02:26.570907 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I1008 18:02:26.602941 289308 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I1008 18:02:26.603006 289308 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I1008 18:02:26.619588 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1008 18:02:26.627916 289308 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1008 18:02:26.627945 289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1008 18:02:26.642559 289308 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1008 18:02:26.642587 289308 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1008 18:02:26.652091 289308 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1008 18:02:26.652124 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1008 18:02:26.664379 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1008 18:02:26.719683 289308 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I1008 18:02:26.719709 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1008 18:02:26.729248 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1008 18:02:26.730692 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1008 18:02:26.778709 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1008 18:02:26.813132 289308 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I1008 18:02:26.813161 289308 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I1008 18:02:26.829269 289308 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1008 18:02:26.829298 289308 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1008 18:02:26.864306 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1008 18:02:26.876827 289308 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I1008 18:02:26.876856 289308 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1008 18:02:26.924619 289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1008 18:02:26.924652 289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1008 18:02:26.971450 289308 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1008 18:02:26.971512 289308 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1008 18:02:26.992496 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1008 18:02:27.037277 289308 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1008 18:02:27.037324 289308 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1008 18:02:27.088211 289308 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I1008 18:02:27.088253 289308 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I1008 18:02:27.140198 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1008 18:02:27.170808 289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1008 18:02:27.170836 289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1008 18:02:27.207675 289308 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I1008 18:02:27.207699 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1008 18:02:27.212796 289308 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1008 18:02:27.212820 289308 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1008 18:02:27.245891 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1008 18:02:27.277990 289308 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I1008 18:02:27.278013 289308 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I1008 18:02:27.346165 289308 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1008 18:02:27.346187 289308 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1008 18:02:27.385334 289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1008 18:02:27.385357 289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1008 18:02:27.438941 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1008 18:02:27.470830 289308 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I1008 18:02:27.470909 289308 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I1008 18:02:27.534797 289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1008 18:02:27.534872 289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1008 18:02:27.597321 289308 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1008 18:02:27.597403 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1008 18:02:27.639973 289308 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
I1008 18:02:27.640052 289308 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
I1008 18:02:27.722545 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1008 18:02:27.766993 289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1008 18:02:27.767069 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1008 18:02:27.779359 289308 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I1008 18:02:27.779438 289308 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I1008 18:02:27.920743 289308 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I1008 18:02:27.920822 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
I1008 18:02:27.971136 289308 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.661370535s)
I1008 18:02:27.971252 289308 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.66184293s)
I1008 18:02:27.971219 289308 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I1008 18:02:27.973404 289308 node_ready.go:35] waiting up to 6m0s for node "addons-246349" to be "Ready" ...
I1008 18:02:27.977906 289308 node_ready.go:49] node "addons-246349" has status "Ready":"True"
I1008 18:02:27.977978 289308 node_ready.go:38] duration metric: took 4.504105ms for node "addons-246349" to be "Ready" ...
I1008 18:02:27.978004 289308 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1008 18:02:27.992605 289308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace to be "Ready" ...
I1008 18:02:28.165417 289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1008 18:02:28.165506 289308 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1008 18:02:28.304573 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I1008 18:02:28.475505 289308 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-246349" context rescaled to 1 replicas
I1008 18:02:28.482585 289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1008 18:02:28.482653 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1008 18:02:28.908153 289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1008 18:02:28.908174 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1008 18:02:29.358635 289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1008 18:02:29.358710 289308 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1008 18:02:29.661783 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1008 18:02:30.003579 289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
I1008 18:02:32.020344 289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
I1008 18:02:32.740229 289308 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1008 18:02:32.740315 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:32.767059 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:33.065695 289308 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1008 18:02:33.242637 289308 addons.go:234] Setting addon gcp-auth=true in "addons-246349"
I1008 18:02:33.242742 289308 host.go:66] Checking if "addons-246349" exists ...
I1008 18:02:33.243312 289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
I1008 18:02:33.267488 289308 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1008 18:02:33.267541 289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
I1008 18:02:33.305427 289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
I1008 18:02:34.522724 289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
I1008 18:02:36.069567 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.449939286s)
I1008 18:02:36.069617 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.405214271s)
I1008 18:02:36.069857 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.340583538s)
I1008 18:02:36.069894 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.339181697s)
I1008 18:02:36.069946 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.291212479s)
I1008 18:02:36.070069 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.205738359s)
I1008 18:02:36.070162 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.077605735s)
I1008 18:02:36.070176 289308 addons.go:475] Verifying addon ingress=true in "addons-246349"
I1008 18:02:36.070308 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.49933368s)
I1008 18:02:36.070357 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.930131321s)
I1008 18:02:36.070371 289308 addons.go:475] Verifying addon registry=true in "addons-246349"
I1008 18:02:36.070707 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.824777843s)
I1008 18:02:36.071133 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.632116065s)
I1008 18:02:36.071162 289308 addons.go:475] Verifying addon metrics-server=true in "addons-246349"
I1008 18:02:36.071261 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.348626898s)
W1008 18:02:36.071291 289308 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1008 18:02:36.071307 289308 retry.go:31] will retry after 127.618052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
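(The two identical failure reports above are the usual CRD race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same batch that creates the snapshot.storage.k8s.io CRDs, so no REST mapping for the new kind exists yet. The retry a few lines below, at 18:02:36.199481, simply re-runs the apply once the CRDs are registered. A minimal sketch of that retry pattern, assuming kubectl is on PATH and using placeholder file names, and not minikube's actual implementation, could look like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` so that custom resources applied in the
// same batch as their CRDs succeed once the CRDs have been registered.
// Illustrative sketch only; not minikube's code.
func applyWithRetry(files []string, attempts int, wait time.Duration) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		time.Sleep(wait) // give the API server time to establish the new CRDs
	}
	return lastErr
}

func main() {
	// File names are placeholders for illustration only.
	if err := applyWithRetry([]string{
		"snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"csi-hostpath-snapshotclass.yaml",
	}, 3, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
)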
I1008 18:02:36.071384 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.766729088s)
I1008 18:02:36.071562 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.409702163s)
I1008 18:02:36.071576 289308 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-246349"
I1008 18:02:36.071732 289308 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.804222853s)
I1008 18:02:36.072926 289308 out.go:177] * Verifying ingress addon...
I1008 18:02:36.074270 289308 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-246349 service yakd-dashboard -n yakd-dashboard
I1008 18:02:36.074301 289308 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I1008 18:02:36.074314 289308 out.go:177] * Verifying csi-hostpath-driver addon...
I1008 18:02:36.074335 289308 out.go:177] * Verifying registry addon...
I1008 18:02:36.076225 289308 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1008 18:02:36.078526 289308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1008 18:02:36.079538 289308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1008 18:02:36.080952 289308 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I1008 18:02:36.081979 289308 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1008 18:02:36.082002 289308 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1008 18:02:36.111483 289308 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1008 18:02:36.111512 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:36.112913 289308 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1008 18:02:36.112940 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:36.113878 289308 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1008 18:02:36.113903 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
W1008 18:02:36.162527 289308 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
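(The warning above is Kubernetes' optimistic-concurrency check: two writers raced to update the csi-hostpath-sc StorageClass, so the later update was rejected because its resourceVersion was stale. The standard remedy is to re-read the object and retry the update on conflict. A minimal client-go sketch of that pattern, not minikube's code, assuming a kubeconfig at the default location:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Clear the default-class annotation, retrying on resourceVersion conflicts
	// like the one logged above.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "csi-hostpath-sc", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err // a Conflict error here triggers another Get+Update attempt
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("csi-hostpath-sc unmarked as default")
}
)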
I1008 18:02:36.168723 289308 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1008 18:02:36.168750 289308 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1008 18:02:36.199481 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1008 18:02:36.266587 289308 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1008 18:02:36.266609 289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1008 18:02:36.326074 289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1008 18:02:36.585892 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:36.586808 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:36.587941 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:37.000140 289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
I1008 18:02:37.083229 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:37.084935 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:37.086565 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:37.593620 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:37.595023 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:37.597137 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:37.867675 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.668147516s)
I1008 18:02:37.867762 289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.541666501s)
I1008 18:02:37.870478 289308 addons.go:475] Verifying addon gcp-auth=true in "addons-246349"
I1008 18:02:37.874839 289308 out.go:177] * Verifying gcp-auth addon...
I1008 18:02:37.876958 289308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1008 18:02:37.880644 289308 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1008 18:02:38.081951 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:38.086394 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:38.088093 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:38.588109 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:38.590226 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:38.591571 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:39.001299 289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
I1008 18:02:39.084075 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:39.086659 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:39.088444 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:39.500588 289308 pod_ready.go:93] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"True"
I1008 18:02:39.500657 289308 pod_ready.go:82] duration metric: took 11.507975652s for pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.500687 289308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wn7rk" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.503827 289308 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-wn7rk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-wn7rk" not found
I1008 18:02:39.503896 289308 pod_ready.go:82] duration metric: took 3.186282ms for pod "coredns-7c65d6cfc9-wn7rk" in "kube-system" namespace to be "Ready" ...
E1008 18:02:39.503923 289308 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-wn7rk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-wn7rk" not found
I1008 18:02:39.503951 289308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-246349" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.510760 289308 pod_ready.go:93] pod "etcd-addons-246349" in "kube-system" namespace has status "Ready":"True"
I1008 18:02:39.510852 289308 pod_ready.go:82] duration metric: took 6.875209ms for pod "etcd-addons-246349" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.510885 289308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-246349" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.518175 289308 pod_ready.go:93] pod "kube-apiserver-addons-246349" in "kube-system" namespace has status "Ready":"True"
I1008 18:02:39.518255 289308 pod_ready.go:82] duration metric: took 7.341828ms for pod "kube-apiserver-addons-246349" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.518286 289308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-246349" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.525756 289308 pod_ready.go:93] pod "kube-controller-manager-addons-246349" in "kube-system" namespace has status "Ready":"True"
I1008 18:02:39.525832 289308 pod_ready.go:82] duration metric: took 7.523404ms for pod "kube-controller-manager-addons-246349" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.525859 289308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjcqn" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.587923 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:39.589640 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:39.591311 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:39.696994 289308 pod_ready.go:93] pod "kube-proxy-pjcqn" in "kube-system" namespace has status "Ready":"True"
I1008 18:02:39.697072 289308 pod_ready.go:82] duration metric: took 171.190843ms for pod "kube-proxy-pjcqn" in "kube-system" namespace to be "Ready" ...
I1008 18:02:39.697100 289308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-246349" in "kube-system" namespace to be "Ready" ...
I1008 18:02:40.087101 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:40.089932 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:40.092050 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:40.097343 289308 pod_ready.go:93] pod "kube-scheduler-addons-246349" in "kube-system" namespace has status "Ready":"True"
I1008 18:02:40.097420 289308 pod_ready.go:82] duration metric: took 400.297504ms for pod "kube-scheduler-addons-246349" in "kube-system" namespace to be "Ready" ...
I1008 18:02:40.097452 289308 pod_ready.go:39] duration metric: took 12.119420424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1008 18:02:40.097482 289308 api_server.go:52] waiting for apiserver process to appear ...
I1008 18:02:40.097565 289308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1008 18:02:40.117964 289308 api_server.go:72] duration metric: took 14.875001195s to wait for apiserver process to appear ...
I1008 18:02:40.118000 289308 api_server.go:88] waiting for apiserver healthz status ...
I1008 18:02:40.118041 289308 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I1008 18:02:40.127775 289308 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
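(The healthz probe logged above is an HTTPS GET against the apiserver at 192.168.49.2:8443. A rough Go sketch of an equivalent check follows; minikube itself uses the cluster's client credentials, and the sketch skips certificate verification purely to stay short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // brevity only
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
)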
I1008 18:02:40.129055 289308 api_server.go:141] control plane version: v1.31.1
I1008 18:02:40.129083 289308 api_server.go:131] duration metric: took 11.07557ms to wait for apiserver health ...
I1008 18:02:40.129092 289308 system_pods.go:43] waiting for kube-system pods to appear ...
I1008 18:02:40.305146 289308 system_pods.go:59] 18 kube-system pods found
I1008 18:02:40.305232 289308 system_pods.go:61] "coredns-7c65d6cfc9-vxnx7" [c1e07fdc-33dc-435e-8e40-b069244eacdf] Running
I1008 18:02:40.305258 289308 system_pods.go:61] "csi-hostpath-attacher-0" [c25c864d-62e5-4fb6-a29a-66844e47450e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1008 18:02:40.305288 289308 system_pods.go:61] "csi-hostpath-resizer-0" [6e8d3e30-cd5e-4a0e-942f-b1de57d6c2f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1008 18:02:40.305416 289308 system_pods.go:61] "csi-hostpathplugin-l5bvz" [18c1aa06-c0d9-4d44-883f-dae66d7ce26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1008 18:02:40.305439 289308 system_pods.go:61] "etcd-addons-246349" [0e2aebbb-383f-4438-8999-b8a36478fbca] Running
I1008 18:02:40.305458 289308 system_pods.go:61] "kindnet-xj6p9" [4aa3675d-fdc3-4086-b0a6-acb881b72a93] Running
I1008 18:02:40.305480 289308 system_pods.go:61] "kube-apiserver-addons-246349" [d9447c16-3440-4a87-b58c-3bbadb85362b] Running
I1008 18:02:40.305514 289308 system_pods.go:61] "kube-controller-manager-addons-246349" [0612d7b3-5fc9-41b8-9e67-9dd8d7fb4035] Running
I1008 18:02:40.305540 289308 system_pods.go:61] "kube-ingress-dns-minikube" [9252a2f3-dbf3-4e58-a28e-ea4af078c472] Running
I1008 18:02:40.305561 289308 system_pods.go:61] "kube-proxy-pjcqn" [58e34b16-87f2-4137-9806-e0bb53cda95f] Running
I1008 18:02:40.305586 289308 system_pods.go:61] "kube-scheduler-addons-246349" [10bdb84b-19ac-45d9-8387-85d41da96479] Running
I1008 18:02:40.305611 289308 system_pods.go:61] "metrics-server-84c5f94fbc-4g8nz" [b47dc422-e583-458b-a57a-f97fb1c1ea0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1008 18:02:40.305634 289308 system_pods.go:61] "nvidia-device-plugin-daemonset-5d4vx" [dafed154-2336-4889-8370-c2b31d4fc071] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1008 18:02:40.305660 289308 system_pods.go:61] "registry-66c9cd494c-8tr5n" [0ecafdb8-54b7-4fd2-a93c-946dbacc3308] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1008 18:02:40.305892 289308 system_pods.go:61] "registry-proxy-827n9" [5050ac4c-9bae-47a6-9b15-3fd5cae17f26] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1008 18:02:40.305928 289308 system_pods.go:61] "snapshot-controller-56fcc65765-8d9jf" [fcb13f15-bbd9-4771-ab2f-c874fe39749b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1008 18:02:40.305952 289308 system_pods.go:61] "snapshot-controller-56fcc65765-mrrwj" [7c68d0d9-5902-4818-87ed-4a154c4cfafd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1008 18:02:40.305974 289308 system_pods.go:61] "storage-provisioner" [3a74ef82-6a22-47bf-bfad-22f738d724d6] Running
I1008 18:02:40.305999 289308 system_pods.go:74] duration metric: took 176.899138ms to wait for pod list to return data ...
I1008 18:02:40.306022 289308 default_sa.go:34] waiting for default service account to be created ...
I1008 18:02:40.496472 289308 default_sa.go:45] found service account: "default"
I1008 18:02:40.496548 289308 default_sa.go:55] duration metric: took 190.503204ms for default service account to be created ...
I1008 18:02:40.496573 289308 system_pods.go:116] waiting for k8s-apps to be running ...
I1008 18:02:40.586247 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:40.586927 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:40.587840 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:40.703414 289308 system_pods.go:86] 18 kube-system pods found
I1008 18:02:40.703518 289308 system_pods.go:89] "coredns-7c65d6cfc9-vxnx7" [c1e07fdc-33dc-435e-8e40-b069244eacdf] Running
I1008 18:02:40.703545 289308 system_pods.go:89] "csi-hostpath-attacher-0" [c25c864d-62e5-4fb6-a29a-66844e47450e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1008 18:02:40.703589 289308 system_pods.go:89] "csi-hostpath-resizer-0" [6e8d3e30-cd5e-4a0e-942f-b1de57d6c2f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1008 18:02:40.703624 289308 system_pods.go:89] "csi-hostpathplugin-l5bvz" [18c1aa06-c0d9-4d44-883f-dae66d7ce26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1008 18:02:40.703648 289308 system_pods.go:89] "etcd-addons-246349" [0e2aebbb-383f-4438-8999-b8a36478fbca] Running
I1008 18:02:40.703676 289308 system_pods.go:89] "kindnet-xj6p9" [4aa3675d-fdc3-4086-b0a6-acb881b72a93] Running
I1008 18:02:40.703710 289308 system_pods.go:89] "kube-apiserver-addons-246349" [d9447c16-3440-4a87-b58c-3bbadb85362b] Running
I1008 18:02:40.703743 289308 system_pods.go:89] "kube-controller-manager-addons-246349" [0612d7b3-5fc9-41b8-9e67-9dd8d7fb4035] Running
I1008 18:02:40.703766 289308 system_pods.go:89] "kube-ingress-dns-minikube" [9252a2f3-dbf3-4e58-a28e-ea4af078c472] Running
I1008 18:02:40.703791 289308 system_pods.go:89] "kube-proxy-pjcqn" [58e34b16-87f2-4137-9806-e0bb53cda95f] Running
I1008 18:02:40.703824 289308 system_pods.go:89] "kube-scheduler-addons-246349" [10bdb84b-19ac-45d9-8387-85d41da96479] Running
I1008 18:02:40.703859 289308 system_pods.go:89] "metrics-server-84c5f94fbc-4g8nz" [b47dc422-e583-458b-a57a-f97fb1c1ea0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1008 18:02:40.703884 289308 system_pods.go:89] "nvidia-device-plugin-daemonset-5d4vx" [dafed154-2336-4889-8370-c2b31d4fc071] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1008 18:02:40.703911 289308 system_pods.go:89] "registry-66c9cd494c-8tr5n" [0ecafdb8-54b7-4fd2-a93c-946dbacc3308] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1008 18:02:40.703945 289308 system_pods.go:89] "registry-proxy-827n9" [5050ac4c-9bae-47a6-9b15-3fd5cae17f26] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1008 18:02:40.703975 289308 system_pods.go:89] "snapshot-controller-56fcc65765-8d9jf" [fcb13f15-bbd9-4771-ab2f-c874fe39749b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1008 18:02:40.704001 289308 system_pods.go:89] "snapshot-controller-56fcc65765-mrrwj" [7c68d0d9-5902-4818-87ed-4a154c4cfafd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1008 18:02:40.704023 289308 system_pods.go:89] "storage-provisioner" [3a74ef82-6a22-47bf-bfad-22f738d724d6] Running
I1008 18:02:40.704062 289308 system_pods.go:126] duration metric: took 207.459055ms to wait for k8s-apps to be running ...
I1008 18:02:40.704093 289308 system_svc.go:44] waiting for kubelet service to be running ....
I1008 18:02:40.704186 289308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1008 18:02:40.717770 289308 system_svc.go:56] duration metric: took 13.652449ms WaitForService to wait for kubelet
I1008 18:02:40.717800 289308 kubeadm.go:582] duration metric: took 15.474842467s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1008 18:02:40.717819 289308 node_conditions.go:102] verifying NodePressure condition ...
I1008 18:02:40.896968 289308 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I1008 18:02:40.897006 289308 node_conditions.go:123] node cpu capacity is 2
I1008 18:02:40.897025 289308 node_conditions.go:105] duration metric: took 179.200113ms to run NodePressure ...
I1008 18:02:40.897039 289308 start.go:241] waiting for startup goroutines ...
I1008 18:02:41.086098 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:41.086983 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:41.088737 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:41.583876 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:41.587568 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:41.589589 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:42.086674 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:42.088137 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:42.090774 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:42.583338 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:42.585033 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:42.586552 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:43.088634 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:43.089584 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:43.090860 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:43.585086 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:43.591513 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:43.594234 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:44.081112 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:44.083448 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:44.086259 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:44.587682 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:44.589560 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:44.590586 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:45.089657 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:45.091319 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:45.093742 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:45.582474 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:45.587684 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:45.589764 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:46.082765 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:46.085569 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:46.086015 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:46.584126 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:46.587118 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:46.590316 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:47.087844 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:47.089398 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:47.090605 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:47.584014 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:47.586544 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:47.589058 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:48.081460 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:48.085597 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:48.086486 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:48.581153 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:48.584262 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:48.585004 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:49.084897 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:49.086141 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:49.087470 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:49.582040 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:49.584691 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:49.584803 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:50.085792 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:50.086748 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:50.088154 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:50.584782 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:50.586106 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:50.586236 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:51.086729 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:51.087943 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:51.089607 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:51.585181 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:51.586159 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:51.587729 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:52.082132 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:52.084202 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:52.086512 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:52.581390 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:52.584675 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:52.585911 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:53.082446 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:53.083857 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:53.085237 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:53.584420 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:53.584838 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:53.586916 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:54.083620 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:54.085078 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:54.087183 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:54.594447 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:54.595108 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:54.596038 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:55.085467 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:55.086654 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:55.089498 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:55.585742 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:55.586819 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:55.587477 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:56.086428 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:56.088050 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:56.089908 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:56.581836 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:56.584600 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:56.586187 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:57.083751 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:57.084468 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:57.085833 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:57.586579 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:57.588398 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:57.589854 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:58.087371 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:58.089275 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:58.091229 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:58.590987 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:58.591381 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:58.592797 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:59.088635 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:59.090307 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:02:59.091475 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:59.583695 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:02:59.584786 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:02:59.585941 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:00.099280 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:03:00.101022 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:00.103069 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:00.584481 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:03:00.585579 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:00.586234 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:01.081631 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:01.084376 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:03:01.085616 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:01.585241 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:01.587169 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:03:01.588429 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:02.081507 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:02.084786 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:02.085571 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:03:02.585313 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:03:02.586913 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:02.592334 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:03.083196 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:03.085610 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1008 18:03:03.087922 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:03.589129 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:03.590682 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:03.594750 289308 kapi.go:107] duration metric: took 27.515207041s to wait for kubernetes.io/minikube-addons=registry ...
I1008 18:03:04.082354 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:04.084679 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:04.584987 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:04.586669 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:05.081951 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:05.086231 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:05.587648 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:05.589407 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:06.083756 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:06.084291 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:06.582580 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:06.585250 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:07.081421 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:07.085211 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:07.587840 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:07.590030 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:08.081895 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:08.084144 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:08.584382 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:08.585170 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:09.082120 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:09.084749 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:09.587013 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:09.589037 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:10.084794 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:10.086301 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:10.583420 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:10.585352 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:11.080829 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:11.084030 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:11.582527 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:11.585402 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:12.082179 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:12.084411 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:12.582480 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:12.585135 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:13.081783 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:13.084231 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:13.582626 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:13.583837 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:14.081528 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:14.084425 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:14.582387 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:14.585573 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:15.083441 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:15.085141 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:15.584379 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:15.585165 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:16.083509 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:16.085260 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:16.582205 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:16.585651 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:17.082167 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:17.083858 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:17.619448 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:17.620816 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:18.086518 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:18.086700 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:18.581215 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:18.584257 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:19.083440 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:19.084817 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:19.588262 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:19.589248 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:20.081502 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:20.085030 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:20.585384 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:20.587776 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:21.082162 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:21.083906 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:21.584043 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:21.584470 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:22.081812 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:22.085233 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:22.582013 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:22.583737 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:23.083187 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:23.084508 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:23.581423 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:23.584227 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:24.083644 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:24.085280 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1008 18:03:24.589794 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:24.591420 289308 kapi.go:107] duration metric: took 48.512891684s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1008 18:03:25.081278 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:25.581852 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:26.081333 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:26.580855 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:27.081823 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:27.582176 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:28.080609 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:28.581290 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:29.080843 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:29.581540 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:30.083549 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:30.581944 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:31.080868 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:31.580480 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:32.080553 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:32.581027 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:33.080741 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:33.581496 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:34.081322 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:34.581249 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:35.080901 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:35.581517 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:36.081235 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:36.581100 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:37.081601 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:37.586182 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:38.081729 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:38.581256 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:39.081756 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:39.581091 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:40.082622 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:40.587040 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:41.082359 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:41.582475 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:42.082043 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:42.580834 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:43.081803 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:43.581574 289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1008 18:03:44.084053 289308 kapi.go:107] duration metric: took 1m8.007809843s to wait for app.kubernetes.io/name=ingress-nginx ...
I1008 18:04:00.882190 289308 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1008 18:04:00.882218 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1008 18:04:01.380749 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1008 18:04:01.880825 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1008 18:04:02.381271 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1008 18:04:02.880475 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1008 18:04:03.380867 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1008 18:04:03.881280 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1008 18:04:04.382155 289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1008 18:04:04.881498 289308 kapi.go:107] duration metric: took 1m27.004537794s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1008 18:04:04.883187 289308 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-246349 cluster.
I1008 18:04:04.888309 289308 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1008 18:04:04.889847 289308 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1008 18:04:04.891678 289308 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1008 18:04:04.893279 289308 addons.go:510] duration metric: took 1m39.649951132s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner volcano metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1008 18:04:04.893324 289308 start.go:246] waiting for cluster config update ...
I1008 18:04:04.893347 289308 start.go:255] writing updated cluster config ...
I1008 18:04:04.893647 289308 ssh_runner.go:195] Run: rm -f paused
I1008 18:04:05.282068 289308 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I1008 18:04:05.284561 289308 out.go:177] * Done! kubectl is now configured to use "addons-246349" cluster and "default" namespace by default
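
The repeated kapi.go:96 lines above come from a poll loop that lists pods by label selector roughly every 500ms until they report Running, then emits the kapi.go:107 duration metric. A minimal client-go sketch of such a loop follows; it is not minikube's actual kapi.go implementation, and the kubeconfig path, namespace, and selector used in main are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector in ns until all are Running,
// logging each non-Running phase much like the kapi.go:96 lines above.
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	start := time.Now()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
			}
		}
		if ready {
			log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %s: %w", selector, ctx.Err())
		case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence in the log
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		log.Fatal(err)
	}
}
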
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
980521b588716 6ef582f3ec844 3 minutes ago Running gcp-auth 0 8c7c7b3f98bc1 gcp-auth-89d5ffd79-bwm7k
8749228d9fe10 289a818c8d9c5 3 minutes ago Running controller 0 1bc344558ca0e ingress-nginx-controller-bc57996ff-rx4wd
bcf4603039a16 ee6d597e62dc8 3 minutes ago Running csi-snapshotter 0 a95fcb2cd959c csi-hostpathplugin-l5bvz
6608913f541df 642ded511e141 4 minutes ago Running csi-provisioner 0 a95fcb2cd959c csi-hostpathplugin-l5bvz
69702d1e60181 922312104da8a 4 minutes ago Running liveness-probe 0 a95fcb2cd959c csi-hostpathplugin-l5bvz
604a87afc74d9 08f6b2990811a 4 minutes ago Running hostpath 0 a95fcb2cd959c csi-hostpathplugin-l5bvz
683b562990719 0107d56dbc0be 4 minutes ago Running node-driver-registrar 0 a95fcb2cd959c csi-hostpathplugin-l5bvz
41870c1927975 1a9605c872c1d 4 minutes ago Running admission 0 874fcd0594ac7 volcano-admission-5874dfdd79-hpn22
d1a47fea008ea 6aa88c604f2b4 4 minutes ago Running volcano-scheduler 0 95777b431ae0e volcano-scheduler-6c9778cbdf-65r4d
e2cecd581940c 9a80d518f102c 4 minutes ago Running csi-attacher 0 2cc3adcd6fd41 csi-hostpath-attacher-0
6428cade97dde 487fa743e1e22 4 minutes ago Running csi-resizer 0 495a3424e411f csi-hostpath-resizer-0
5b5bc1dd22a92 1461903ec4fe9 4 minutes ago Running csi-external-health-monitor-controller 0 a95fcb2cd959c csi-hostpathplugin-l5bvz
29801a6f7ef4a 23cbb28ae641a 4 minutes ago Running volcano-controllers 0 f365ab3fde634 volcano-controllers-789ffc5785-qbzrm
91d0ecd4b69c5 420193b27261a 4 minutes ago Exited patch 0 d8a2222c1ce83 ingress-nginx-admission-patch-nm6hq
a181e62db6600 420193b27261a 4 minutes ago Exited create 0 dce95640fb69c ingress-nginx-admission-create-f8ktk
dcae22117cba6 7ce2150c8929b 4 minutes ago Running local-path-provisioner 0 81cc3e0d2676a local-path-provisioner-86d989889c-wkjcc
1df1c52b7b5da f7ed138f698f6 4 minutes ago Running registry-proxy 0 fc1a802e25193 registry-proxy-827n9
7aa2fb39c4b18 4d1e5c3e97420 4 minutes ago Running volume-snapshot-controller 0 515496cb3ec2e snapshot-controller-56fcc65765-mrrwj
ed7a2c782a48e 4d1e5c3e97420 4 minutes ago Running volume-snapshot-controller 0 e6f554b38e9f9 snapshot-controller-56fcc65765-8d9jf
b055f8b51c26d 5548a49bb60ba 4 minutes ago Running metrics-server 0 774f7f31fb739 metrics-server-84c5f94fbc-4g8nz
b80ed2be5cfc8 77bdba588b953 4 minutes ago Running yakd 0 2fdad532661ad yakd-dashboard-67d98fc6b-8ztv6
76ab47c2184b6 c9cf76bb104e1 4 minutes ago Running registry 0 131a74c775757 registry-66c9cd494c-8tr5n
2c6544a6f9b23 be9cac3585579 4 minutes ago Running cloud-spanner-emulator 0 d8d0d21134de7 cloud-spanner-emulator-5b584cc74-b4d47
13ac29a0e4d85 a9bac31a5be8d 4 minutes ago Running nvidia-device-plugin-ctr 0 3b4770a1bce87 nvidia-device-plugin-daemonset-5d4vx
04ec1433e8816 68de1ddeaded8 4 minutes ago Running gadget 0 c1378e829b327 gadget-nff5l
97c83e1876804 35508c2f890c4 4 minutes ago Running minikube-ingress-dns 0 f6e5f4a604687 kube-ingress-dns-minikube
6c2b94ff7a984 2f6c962e7b831 4 minutes ago Running coredns 0 791dfc7f03e80 coredns-7c65d6cfc9-vxnx7
7691651a94691 ba04bb24b9575 4 minutes ago Running storage-provisioner 0 f70e654436b88 storage-provisioner
a0ee92e9cb26f 6a23fa8fd2b78 4 minutes ago Running kindnet-cni 0 391ff04a49caf kindnet-xj6p9
05eb020f1ea0a 24a140c548c07 4 minutes ago Running kube-proxy 0 b69a32ea58af3 kube-proxy-pjcqn
7bf20a531418a 7f8aa378bb47d 5 minutes ago Running kube-scheduler 0 3246de140011d kube-scheduler-addons-246349
51513c8a85f77 27e3830e14027 5 minutes ago Running etcd 0 57857b77c4fab etcd-addons-246349
931e105ba9202 d3f53a98c0a9d 5 minutes ago Running kube-apiserver 0 34aea09aab90c kube-apiserver-addons-246349
84783b5587d4d 279f381cb3736 5 minutes ago Running kube-controller-manager 0 9a9652d2200d9 kube-controller-manager-addons-246349
==> containerd <==
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.088107393Z" level=info msg="TearDown network for sandbox \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\" successfully"
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.088147489Z" level=info msg="StopPodSandbox for \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\" returns successfully"
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.088842242Z" level=info msg="RemovePodSandbox for \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\""
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.088889739Z" level=info msg="Forcibly stopping sandbox \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\""
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.102918196Z" level=info msg="TearDown network for sandbox \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\" successfully"
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.109304413Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.109460047Z" level=info msg="RemovePodSandbox \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\" returns successfully"
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.110662848Z" level=info msg="StopPodSandbox for \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\""
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.118847744Z" level=info msg="TearDown network for sandbox \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\" successfully"
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.118888931Z" level=info msg="StopPodSandbox for \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\" returns successfully"
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.119563485Z" level=info msg="RemovePodSandbox for \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\""
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.119598986Z" level=info msg="Forcibly stopping sandbox \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\""
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.127420252Z" level=info msg="TearDown network for sandbox \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\" successfully"
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.134002289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.134181873Z" level=info msg="RemovePodSandbox \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\" returns successfully"
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.138713671Z" level=info msg="RemoveContainer for \"5803c163e00cf0eee0bd350a8b9db4f15ec0256048cdf96f0db0def1f72dea5a\""
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.145524890Z" level=info msg="RemoveContainer for \"5803c163e00cf0eee0bd350a8b9db4f15ec0256048cdf96f0db0def1f72dea5a\" returns successfully"
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.147512145Z" level=info msg="StopPodSandbox for \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\""
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.155200970Z" level=info msg="TearDown network for sandbox \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\" successfully"
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.155383031Z" level=info msg="StopPodSandbox for \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\" returns successfully"
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.156105617Z" level=info msg="RemovePodSandbox for \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\""
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.156147748Z" level=info msg="Forcibly stopping sandbox \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\""
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.164966291Z" level=info msg="TearDown network for sandbox \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\" successfully"
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.172749626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.172906712Z" level=info msg="RemovePodSandbox \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\" returns successfully"
==> coredns [6c2b94ff7a984f8d04d8b498ee95608c149d7140ff36d16f624705cc2eb30d11] <==
[INFO] 10.244.0.10:50276 - 33321 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076722s
[INFO] 10.244.0.10:50276 - 18308 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001768845s
[INFO] 10.244.0.10:50276 - 65302 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002485845s
[INFO] 10.244.0.10:50276 - 27937 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000187376s
[INFO] 10.244.0.10:50276 - 56698 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000070567s
[INFO] 10.244.0.10:40412 - 31062 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102032s
[INFO] 10.244.0.10:40412 - 30832 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000036388s
[INFO] 10.244.0.10:48233 - 25595 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086764s
[INFO] 10.244.0.10:48233 - 25150 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000032392s
[INFO] 10.244.0.10:35157 - 23956 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048834s
[INFO] 10.244.0.10:35157 - 23767 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056177s
[INFO] 10.244.0.10:50403 - 11654 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00139246s
[INFO] 10.244.0.10:50403 - 11843 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002251989s
[INFO] 10.244.0.10:35526 - 48709 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077393s
[INFO] 10.244.0.10:35526 - 49129 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000050934s
[INFO] 10.244.0.24:35068 - 45868 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195114s
[INFO] 10.244.0.24:47573 - 40190 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000081824s
[INFO] 10.244.0.24:36383 - 47688 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000083424s
[INFO] 10.244.0.24:40112 - 20652 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117818s
[INFO] 10.244.0.24:41941 - 5454 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000203089s
[INFO] 10.244.0.24:48138 - 10559 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117236s
[INFO] 10.244.0.24:59564 - 6573 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002234458s
[INFO] 10.244.0.24:50982 - 5978 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002146751s
[INFO] 10.244.0.24:43387 - 60009 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001591699s
[INFO] 10.244.0.24:44938 - 62531 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001659117s
==> describe nodes <==
Name: addons-246349
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-246349
kubernetes.io/os=linux
minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
minikube.k8s.io/name=addons-246349
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_10_08T18_02_20_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-246349
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-246349"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 08 Oct 2024 18:02:17 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-246349
AcquireTime: <unset>
RenewTime: Tue, 08 Oct 2024 18:07:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 08 Oct 2024 18:04:22 +0000 Tue, 08 Oct 2024 18:02:14 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 08 Oct 2024 18:04:22 +0000 Tue, 08 Oct 2024 18:02:14 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 08 Oct 2024 18:04:22 +0000 Tue, 08 Oct 2024 18:02:14 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 08 Oct 2024 18:04:22 +0000 Tue, 08 Oct 2024 18:02:18 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-246349
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: cbf405d2c0144d0694aae0a7fa67238d
System UUID: 720ab22a-5498-4c8a-9cc4-cacf12496aa0
Boot ID: b951cf46-640a-45c2-9395-0fcf341c803c
Kernel Version: 5.15.0-1070-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.22
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (27 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default cloud-spanner-emulator-5b584cc74-b4d47 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m56s
gadget gadget-nff5l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m52s
gcp-auth gcp-auth-89d5ffd79-bwm7k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m24s
ingress-nginx ingress-nginx-controller-bc57996ff-rx4wd 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 4m51s
kube-system coredns-7c65d6cfc9-vxnx7 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 4m59s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
kube-system csi-hostpathplugin-l5bvz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
kube-system etcd-addons-246349 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 5m4s
kube-system kindnet-xj6p9 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 4m59s
kube-system kube-apiserver-addons-246349 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m4s
kube-system kube-controller-manager-addons-246349 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m4s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m55s
kube-system kube-proxy-pjcqn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m59s
kube-system kube-scheduler-addons-246349 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m4s
kube-system metrics-server-84c5f94fbc-4g8nz 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 4m53s
kube-system nvidia-device-plugin-daemonset-5d4vx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m56s
kube-system registry-66c9cd494c-8tr5n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m55s
kube-system registry-proxy-827n9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m55s
kube-system snapshot-controller-56fcc65765-8d9jf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s
kube-system snapshot-controller-56fcc65765-mrrwj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m54s
local-path-storage local-path-provisioner-86d989889c-wkjcc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s
volcano-system volcano-admission-5874dfdd79-hpn22 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m50s
volcano-system volcano-controllers-789ffc5785-qbzrm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
volcano-system volcano-scheduler-6c9778cbdf-65r4d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
yakd-dashboard yakd-dashboard-67d98fc6b-8ztv6 0 (0%) 0 (0%) 128Mi (1%) 256Mi (3%) 4m55s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1050m (52%) 100m (5%)
memory 638Mi (8%) 476Mi (6%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m57s kube-proxy
Normal NodeAllocatableEnforced 5m11s kubelet Updated Node Allocatable limit across pods
Warning CgroupV1 5m11s kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeHasSufficientMemory 5m11s (x8 over 5m11s) kubelet Node addons-246349 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m11s (x7 over 5m11s) kubelet Node addons-246349 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m11s (x7 over 5m11s) kubelet Node addons-246349 status is now: NodeHasSufficientPID
Normal Starting 5m11s kubelet Starting kubelet.
Normal Starting 5m5s kubelet Starting kubelet.
Warning CgroupV1 5m5s kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 5m4s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5m4s kubelet Node addons-246349 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m4s kubelet Node addons-246349 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m4s kubelet Node addons-246349 status is now: NodeHasSufficientPID
Normal RegisteredNode 5m node-controller Node addons-246349 event: Registered Node addons-246349 in Controller
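
From the node data above, a quick check of why this 2-CPU node is nearly full: 1050m of the 2000m allocatable CPU is already requested, leaving roughly 950m, so any additional pod requesting a full CPU cannot be scheduled here. A small sketch using the apimachinery resource package; the allocatable and requested values are copied from the describe output, while the 1-CPU new-pod request is a hypothetical example.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	allocatable := resource.MustParse("2")   // node .status.allocatable cpu (from "Allocatable" above)
	requested := resource.MustParse("1050m") // sum of CPU requests (from "Allocated resources" above)
	newPod := resource.MustParse("1")        // hypothetical pod asking for one full CPU

	free := allocatable.DeepCopy()
	free.Sub(requested)
	fmt.Printf("free cpu: %s\n", free.String()) // 950m

	fits := newPod.Cmp(free) <= 0
	fmt.Printf("pod requesting %s fits: %v\n", newPod.String(), fits) // false
}
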
==> dmesg <==
[Oct 8 16:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.471811] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.053322] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.014987] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.650369] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.406873] kauditd_printk_skb: 36 callbacks suppressed
[Oct 8 16:57] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Oct 8 17:31] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
==> etcd [51513c8a85f774b4c758f44d804304074d74a4df6b212641ab98897b1cc8d08c] <==
{"level":"info","ts":"2024-10-08T18:02:14.131075Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-10-08T18:02:14.122622Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-10-08T18:02:14.131178Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-10-08T18:02:14.130968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2024-10-08T18:02:14.131369Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-10-08T18:02:14.747438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-10-08T18:02:14.747667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-10-08T18:02:14.747793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-10-08T18:02:14.747907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-10-08T18:02:14.747987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-10-08T18:02:14.748117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-10-08T18:02:14.748192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-10-08T18:02:14.751615Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-10-08T18:02:14.753072Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-246349 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-10-08T18:02:14.753336Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-10-08T18:02:14.753805Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-10-08T18:02:14.754129Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-10-08T18:02:14.754224Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-10-08T18:02:14.754929Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-10-08T18:02:14.762679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-10-08T18:02:14.757607Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-10-08T18:02:14.757649Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-10-08T18:02:14.789900Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-10-08T18:02:14.790035Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-10-08T18:02:14.794162Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}

==> gcp-auth [980521b58871663597d7aa280874de03bd381fd57a5c0c9411d8b78c1425c3a3] <==
2024/10/08 18:04:03 GCP Auth Webhook started!
2024/10/08 18:04:21 Ready to marshal response ...
2024/10/08 18:04:21 Ready to write response ...
2024/10/08 18:04:22 Ready to marshal response ...
2024/10/08 18:04:22 Ready to write response ...

==> kernel <==
18:07:24 up 1:49, 0 users, load average: 0.56, 1.32, 1.94
Linux addons-246349 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"

==> kindnet [a0ee92e9cb26fec74d7686d78194d24e054266eef9cd829964d4e76a5ca41393] <==
I1008 18:05:16.911067 1 main.go:299] handling current node
I1008 18:05:26.910646 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:05:26.910678 1 main.go:299] handling current node
I1008 18:05:36.916220 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:05:36.916257 1 main.go:299] handling current node
I1008 18:05:46.914999 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:05:46.915034 1 main.go:299] handling current node
I1008 18:05:56.915129 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:05:56.915163 1 main.go:299] handling current node
I1008 18:06:06.913116 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:06:06.913154 1 main.go:299] handling current node
I1008 18:06:16.913724 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:06:16.913761 1 main.go:299] handling current node
I1008 18:06:26.911012 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:06:26.911046 1 main.go:299] handling current node
I1008 18:06:36.913983 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:06:36.914019 1 main.go:299] handling current node
I1008 18:06:46.919745 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:06:46.919786 1 main.go:299] handling current node
I1008 18:06:56.910606 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:06:56.910642 1 main.go:299] handling current node
I1008 18:07:06.916607 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:07:06.916640 1 main.go:299] handling current node
I1008 18:07:16.917744 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I1008 18:07:16.917776 1 main.go:299] handling current node

==> kube-apiserver [931e105ba9202a3c6933ff2e79e14d2fc2b27a2c0ad75e0a0e1f4b5fde19be28] <==
I1008 18:03:00.569493 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
W1008 18:03:08.549855 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
E1008 18:03:08.549896 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
W1008 18:03:08.551699 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:08.624681 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
E1008 18:03:08.624721 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
W1008 18:03:08.628509 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:16.783295 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:17.829990 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:18.880333 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:19.629353 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
E1008 18:03:19.629393 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
W1008 18:03:19.631281 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:19.931644 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:20.952917 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:22.048375 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:23.123392 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
W1008 18:03:40.562335 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
E1008 18:03:40.562370 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
W1008 18:03:40.640146 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
E1008 18:03:40.640184 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
W1008 18:04:00.595819 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
E1008 18:04:00.595861 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
I1008 18:04:21.839096 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I1008 18:04:21.901946 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh

==> kube-controller-manager [84783b5587d4d50f5c1f80f9531cce51c0d0b992e6f9a5d70c1c3c4530f38fa9] <==
I1008 18:03:42.677965 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I1008 18:03:42.937960 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I1008 18:03:43.681708 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="80.84µs"
I1008 18:03:43.896889 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I1008 18:03:43.944904 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I1008 18:03:43.958416 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I1008 18:03:43.965487 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I1008 18:03:44.903264 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I1008 18:03:44.911301 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I1008 18:03:44.917601 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I1008 18:03:51.763846 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-246349"
I1008 18:03:57.926637 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="13.592096ms"
I1008 18:03:57.928564 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="77.362µs"
I1008 18:04:00.613316 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="19.951748ms"
I1008 18:04:00.644837 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="31.469199ms"
I1008 18:04:00.645073 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="188.493µs"
I1008 18:04:00.657559 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="65.932µs"
I1008 18:04:04.754020 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="13.726429ms"
I1008 18:04:04.754861 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="30.726µs"
I1008 18:04:13.018116 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
I1008 18:04:13.056585 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
I1008 18:04:14.009566 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
I1008 18:04:14.038807 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
I1008 18:04:21.537561 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
I1008 18:04:22.149737 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-246349"

==> kube-proxy [05eb020f1ea0a837a01cd0b3c03976b8f6e26076bd75e79a502cb8361daa06c8] <==
I1008 18:02:26.551511 1 server_linux.go:66] "Using iptables proxy"
I1008 18:02:26.657733 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E1008 18:02:26.657819 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1008 18:02:26.700843 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1008 18:02:26.700911 1 server_linux.go:169] "Using iptables Proxier"
I1008 18:02:26.705351 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1008 18:02:26.705930 1 server.go:483] "Version info" version="v1.31.1"
I1008 18:02:26.705947 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1008 18:02:26.707886 1 config.go:199] "Starting service config controller"
I1008 18:02:26.707908 1 shared_informer.go:313] Waiting for caches to sync for service config
I1008 18:02:26.707926 1 config.go:105] "Starting endpoint slice config controller"
I1008 18:02:26.707930 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I1008 18:02:26.708325 1 config.go:328] "Starting node config controller"
I1008 18:02:26.708332 1 shared_informer.go:313] Waiting for caches to sync for node config
I1008 18:02:26.808500 1 shared_informer.go:320] Caches are synced for node config
I1008 18:02:26.808511 1 shared_informer.go:320] Caches are synced for service config
I1008 18:02:26.808535 1 shared_informer.go:320] Caches are synced for endpoint slice config

==> kube-scheduler [7bf20a531418a403928d11a062dc813cc7f3428d4c13cb3bc97ffb2cbfb60f72] <==
W1008 18:02:17.691850 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1008 18:02:17.692293 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.520247 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1008 18:02:18.520303 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.521368 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1008 18:02:18.521580 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.548474 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1008 18:02:18.548736 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.550177 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1008 18:02:18.550210 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.550457 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1008 18:02:18.550520 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.554626 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1008 18:02:18.554665 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.577449 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1008 18:02:18.577723 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.589789 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1008 18:02:18.590048 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.667706 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1008 18:02:18.667941 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.821734 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1008 18:02:18.821959 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1008 18:02:18.923728 1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1008 18:02:18.923771 1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I1008 18:02:20.676165 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kubelet <==
Oct 08 18:03:44 addons-246349 kubelet[1500]: I1008 18:03:44.666809 1500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f"
Oct 08 18:03:54 addons-246349 kubelet[1500]: I1008 18:03:54.986749 1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d4vx" secret="" err="secret \"gcp-auth\" not found"
Oct 08 18:04:00 addons-246349 kubelet[1500]: E1008 18:04:00.622231 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d01453aa-f430-491d-8ae4-e6894b954954" containerName="patch"
Oct 08 18:04:00 addons-246349 kubelet[1500]: E1008 18:04:00.622312 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c9137c7-0886-4c7b-9d0e-c3005aa0d173" containerName="create"
Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.622368 1500 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9137c7-0886-4c7b-9d0e-c3005aa0d173" containerName="create"
Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.622378 1500 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01453aa-f430-491d-8ae4-e6894b954954" containerName="patch"
Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.689118 1500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4f215c74-d0c2-4dac-84bf-d2ccdc093974-gcp-creds\") pod \"gcp-auth-89d5ffd79-bwm7k\" (UID: \"4f215c74-d0c2-4dac-84bf-d2ccdc093974\") " pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k"
Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.689179 1500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f215c74-d0c2-4dac-84bf-d2ccdc093974-webhook-certs\") pod \"gcp-auth-89d5ffd79-bwm7k\" (UID: \"4f215c74-d0c2-4dac-84bf-d2ccdc093974\") " pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k"
Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.689209 1500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9gw\" (UniqueName: \"kubernetes.io/projected/4f215c74-d0c2-4dac-84bf-d2ccdc093974-kube-api-access-bn9gw\") pod \"gcp-auth-89d5ffd79-bwm7k\" (UID: \"4f215c74-d0c2-4dac-84bf-d2ccdc093974\") " pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k"
Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.689244 1500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-project\" (UniqueName: \"kubernetes.io/host-path/4f215c74-d0c2-4dac-84bf-d2ccdc093974-gcp-project\") pod \"gcp-auth-89d5ffd79-bwm7k\" (UID: \"4f215c74-d0c2-4dac-84bf-d2ccdc093974\") " pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k"
Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.986627 1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-8tr5n" secret="" err="secret \"gcp-auth\" not found"
Oct 08 18:04:13 addons-246349 kubelet[1500]: I1008 18:04:13.035289 1500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k" podStartSLOduration=10.205053457 podStartE2EDuration="13.035268123s" podCreationTimestamp="2024-10-08 18:04:00 +0000 UTC" firstStartedPulling="2024-10-08 18:04:01.031769041 +0000 UTC m=+101.150799063" lastFinishedPulling="2024-10-08 18:04:03.861983707 +0000 UTC m=+103.981013729" observedRunningTime="2024-10-08 18:04:04.744821384 +0000 UTC m=+104.863851397" watchObservedRunningTime="2024-10-08 18:04:13.035268123 +0000 UTC m=+113.154298137"
Oct 08 18:04:13 addons-246349 kubelet[1500]: I1008 18:04:13.990385 1500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9137c7-0886-4c7b-9d0e-c3005aa0d173" path="/var/lib/kubelet/pods/2c9137c7-0886-4c7b-9d0e-c3005aa0d173/volumes"
Oct 08 18:04:15 addons-246349 kubelet[1500]: I1008 18:04:15.990654 1500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01453aa-f430-491d-8ae4-e6894b954954" path="/var/lib/kubelet/pods/d01453aa-f430-491d-8ae4-e6894b954954/volumes"
Oct 08 18:04:17 addons-246349 kubelet[1500]: I1008 18:04:17.986521 1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-827n9" secret="" err="secret \"gcp-auth\" not found"
Oct 08 18:04:20 addons-246349 kubelet[1500]: I1008 18:04:20.055017 1500 scope.go:117] "RemoveContainer" containerID="9a257d92e4c54f9a9da85f8489c88f371c8db9e8a4da114c886b42f4d58e2207"
Oct 08 18:04:20 addons-246349 kubelet[1500]: I1008 18:04:20.062682 1500 scope.go:117] "RemoveContainer" containerID="6fdab89be6076baa06806fcb998104b225420d043ec4fbe4fc036f80d01168ac"
Oct 08 18:04:21 addons-246349 kubelet[1500]: I1008 18:04:21.992538 1500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7826b1a6-cf6b-4426-a257-d67d2b32e54d" path="/var/lib/kubelet/pods/7826b1a6-cf6b-4426-a257-d67d2b32e54d/volumes"
Oct 08 18:05:05 addons-246349 kubelet[1500]: I1008 18:05:05.987471 1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-8tr5n" secret="" err="secret \"gcp-auth\" not found"
Oct 08 18:05:10 addons-246349 kubelet[1500]: I1008 18:05:10.987453 1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d4vx" secret="" err="secret \"gcp-auth\" not found"
Oct 08 18:05:20 addons-246349 kubelet[1500]: I1008 18:05:20.137269 1500 scope.go:117] "RemoveContainer" containerID="5803c163e00cf0eee0bd350a8b9db4f15ec0256048cdf96f0db0def1f72dea5a"
Oct 08 18:05:20 addons-246349 kubelet[1500]: I1008 18:05:20.987034 1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-827n9" secret="" err="secret \"gcp-auth\" not found"
Oct 08 18:06:22 addons-246349 kubelet[1500]: I1008 18:06:22.986807 1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-827n9" secret="" err="secret \"gcp-auth\" not found"
Oct 08 18:06:23 addons-246349 kubelet[1500]: I1008 18:06:23.986916 1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-8tr5n" secret="" err="secret \"gcp-auth\" not found"
Oct 08 18:06:37 addons-246349 kubelet[1500]: I1008 18:06:37.987384 1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d4vx" secret="" err="secret \"gcp-auth\" not found"

==> storage-provisioner [7691651a9469175aa252f47f0093581ef43db330ceb7c331793947033e722a48] <==
I1008 18:02:31.070261 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1008 18:02:31.081441 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1008 18:02:31.081492 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1008 18:02:31.094192 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1008 18:02:31.096785 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f9f89ce-996e-4cef-a206-5313a963ed8e", APIVersion:"v1", ResourceVersion:"560", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-246349_44e3839c-233d-420a-ab18-92900568c363 became leader
I1008 18:02:31.096904 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-246349_44e3839c-233d-420a-ab18-92900568c363!
I1008 18:02:31.197436 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-246349_44e3839c-233d-420a-ab18-92900568c363!

-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-246349 -n addons-246349
helpers_test.go:261: (dbg) Run: kubectl --context addons-246349 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-f8ktk ingress-nginx-admission-patch-nm6hq test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-246349 describe pod ingress-nginx-admission-create-f8ktk ingress-nginx-admission-patch-nm6hq test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-246349 describe pod ingress-nginx-admission-create-f8ktk ingress-nginx-admission-patch-nm6hq test-job-nginx-0: exit status 1 (98.049064ms)
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-f8ktk" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-nm6hq" not found
Error from server (NotFound): pods "test-job-nginx-0" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-246349 describe pod ingress-nginx-admission-create-f8ktk ingress-nginx-admission-patch-nm6hq test-job-nginx-0: exit status 1
addons_test.go:979: (dbg) Run: out/minikube-linux-arm64 -p addons-246349 addons disable volcano --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 addons disable volcano --alsologtostderr -v=1: (11.301608714s)
--- FAIL: TestAddons/serial/Volcano (211.21s)