=== RUN TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 45.633542ms
addons_test.go:897: volcano-scheduler stabilized in 47.496969ms
addons_test.go:913: volcano-controller stabilized in 52.205242ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-6nckj" [9be65c16-23bc-42cd-be09-6a1529232f13] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003515504s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-xlm2l" [b69a5d75-fb60-409c-905f-3a5c95cfe0c4] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004401912s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-gm6cq" [417b3839-6552-48a5-84cb-035140b24fcc] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004279629s
addons_test.go:932: (dbg) Run: kubectl --context addons-858013 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run: kubectl --context addons-858013 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run: kubectl --context addons-858013 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [1092fae1-94bb-4684-bc06-3d54692af512] Pending
helpers_test.go:344: "test-job-nginx-0" [1092fae1-94bb-4684-bc06-3d54692af512] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-858013 -n addons-858013
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-15 23:11:43.213612278 +0000 UTC m=+368.017005335
addons_test.go:964: (dbg) Run: kubectl --context addons-858013 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-858013 describe po test-job-nginx-0 -n my-volcano:
Name: test-job-nginx-0
Namespace: my-volcano
Priority: 0
Service Account: default
Node: <none>
Labels: volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations: scheduling.k8s.io/group-name: test-job-daf5dee2-f26d-4af1-9384-ec8ef3c8201e
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status: Pending
IP:
IPs: <none>
Controlled By: Job/test-job
Containers:
nginx:
Image: nginx:latest
Port: <none>
Host Port: <none>
Command:
sleep
10m
Limits:
cpu: 1
Requests:
cpu: 1
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xplg2 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-xplg2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m59s volcano 0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run: kubectl --context addons-858013 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-858013 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
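A plausible reading of the failure above, based only on what this log records: the test pod asks for a full CPU (Requests/Limits cpu: 1), the kic node is created with --cpus=2 (NanoCpus 2000000000 in the docker inspect below), and the control plane plus the long list of enabled addons already hold CPU requests on that single node, so volcano reports "0/1 nodes are unavailable: 1 Insufficient cpu." The two commands below are not part of the test run; they are one way to confirm the allocatable-versus-requested CPU picture on a live cluster, assuming the profile and context names used in this log:
kubectl --context addons-858013 describe node addons-858013 | grep -A 10 'Allocated resources'
kubectl --context addons-858013 get pods -A -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'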
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-858013
helpers_test.go:235: (dbg) docker inspect addons-858013:
-- stdout --
[
{
"Id": "b7511ac829da2099bbaf8c58270d1c12bf8caa1ba742928bbd45f8eccecb12e0",
"Created": "2024-08-15T23:06:20.523173334Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1433401,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-08-15T23:06:20.673330768Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
"ResolvConfPath": "/var/lib/docker/containers/b7511ac829da2099bbaf8c58270d1c12bf8caa1ba742928bbd45f8eccecb12e0/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/b7511ac829da2099bbaf8c58270d1c12bf8caa1ba742928bbd45f8eccecb12e0/hostname",
"HostsPath": "/var/lib/docker/containers/b7511ac829da2099bbaf8c58270d1c12bf8caa1ba742928bbd45f8eccecb12e0/hosts",
"LogPath": "/var/lib/docker/containers/b7511ac829da2099bbaf8c58270d1c12bf8caa1ba742928bbd45f8eccecb12e0/b7511ac829da2099bbaf8c58270d1c12bf8caa1ba742928bbd45f8eccecb12e0-json.log",
"Name": "/addons-858013",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-858013:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-858013",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/754b4cf7b5cd28f792f5e151188d7e717fe42b8a2796a7706e324b3805d7bdfc-init/diff:/var/lib/docker/overlay2/e428b765a276f13f40b5e5004929d0f757eb41eeeb7bde16d60cca7148ca82dd/diff",
"MergedDir": "/var/lib/docker/overlay2/754b4cf7b5cd28f792f5e151188d7e717fe42b8a2796a7706e324b3805d7bdfc/merged",
"UpperDir": "/var/lib/docker/overlay2/754b4cf7b5cd28f792f5e151188d7e717fe42b8a2796a7706e324b3805d7bdfc/diff",
"WorkDir": "/var/lib/docker/overlay2/754b4cf7b5cd28f792f5e151188d7e717fe42b8a2796a7706e324b3805d7bdfc/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-858013",
"Source": "/var/lib/docker/volumes/addons-858013/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-858013",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-858013",
"name.minikube.sigs.k8s.io": "addons-858013",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "49aa5b1f1a7fe9efded84a1b8f2a2b5bce4ca19fe6aef7abdd2ede13845046f2",
"SandboxKey": "/var/run/docker/netns/49aa5b1f1a7f",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34617"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34618"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34621"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34619"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34620"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-858013": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "a87dd743274275323c5e864d355b2fa23b9a4b83107969d26194a74d42156598",
"EndpointID": "2249c98c4fc744a27cc498e599e82357b910a33d37d2517eb103c571cb878ed6",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-858013",
"b7511ac829da"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-858013 -n addons-858013
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-858013 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-858013 logs -n 25: (1.69380423s)
helpers_test.go:252: TestAddons/serial/Volcano logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-840763 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | |
| | -p download-only-840763 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | --all | minikube | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
| delete | -p download-only-840763 | download-only-840763 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
| start | -o=json --download-only | download-only-636124 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | |
| | -p download-only-636124 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.0 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | --all | minikube | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
| delete | -p download-only-636124 | download-only-636124 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
| delete | -p download-only-840763 | download-only-840763 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
| delete | -p download-only-636124 | download-only-636124 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
| start | --download-only -p | download-docker-280377 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | |
| | download-docker-280377 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p download-docker-280377 | download-docker-280377 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
| start | --download-only -p | binary-mirror-984587 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | |
| | binary-mirror-984587 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:40775 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p binary-mirror-984587 | binary-mirror-984587 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
| addons | enable dashboard -p | addons-858013 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | |
| | addons-858013 | | | | | |
| addons | disable dashboard -p | addons-858013 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | |
| | addons-858013 | | | | | |
| start | -p addons-858013 --wait=true | addons-858013 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:08 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
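Reassembled for readability, the wrapped start row at the end of the audit table corresponds roughly to the following single invocation (a reconstruction from the table rows, not part of the captured log; the binary path is assumed from the other commands shown in this report, and flag order follows the table):
out/minikube-linux-arm64 start -p addons-858013 --wait=true --memory=4000 --alsologtostderr \
  --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
  --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
  --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker \
  --container-runtime=containerd --addons=ingress --addons=ingress-dns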
==> Last Start <==
Log file created at: 2024/08/15 23:05:56
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.22.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0815 23:05:56.531160 1432906 out.go:345] Setting OutFile to fd 1 ...
I0815 23:05:56.531327 1432906 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:05:56.531356 1432906 out.go:358] Setting ErrFile to fd 2...
I0815 23:05:56.531377 1432906 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:05:56.531618 1432906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-1426744/.minikube/bin
I0815 23:05:56.532065 1432906 out.go:352] Setting JSON to false
I0815 23:05:56.532977 1432906 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28100,"bootTime":1723735057,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0815 23:05:56.533071 1432906 start.go:139] virtualization:
I0815 23:05:56.535384 1432906 out.go:177] * [addons-858013] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0815 23:05:56.538181 1432906 out.go:177] - MINIKUBE_LOCATION=19452
I0815 23:05:56.538303 1432906 notify.go:220] Checking for updates...
I0815 23:05:56.542954 1432906 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0815 23:05:56.545039 1432906 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19452-1426744/kubeconfig
I0815 23:05:56.547014 1432906 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-1426744/.minikube
I0815 23:05:56.549110 1432906 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0815 23:05:56.551076 1432906 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0815 23:05:56.553346 1432906 driver.go:392] Setting default libvirt URI to qemu:///system
I0815 23:05:56.592008 1432906 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
I0815 23:05:56.592147 1432906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0815 23:05:56.654688 1432906 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 23:05:56.645217395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
I0815 23:05:56.654800 1432906 docker.go:307] overlay module found
I0815 23:05:56.657288 1432906 out.go:177] * Using the docker driver based on user configuration
I0815 23:05:56.659066 1432906 start.go:297] selected driver: docker
I0815 23:05:56.659087 1432906 start.go:901] validating driver "docker" against <nil>
I0815 23:05:56.659101 1432906 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0815 23:05:56.659724 1432906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0815 23:05:56.722858 1432906 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 23:05:56.713989358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
I0815 23:05:56.723027 1432906 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0815 23:05:56.723252 1432906 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0815 23:05:56.725924 1432906 out.go:177] * Using Docker driver with root privileges
I0815 23:05:56.728132 1432906 cni.go:84] Creating CNI manager for ""
I0815 23:05:56.728151 1432906 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0815 23:05:56.728167 1432906 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0815 23:05:56.728266 1432906 start.go:340] cluster config:
{Name:addons-858013 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-858013 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0815 23:05:56.730694 1432906 out.go:177] * Starting "addons-858013" primary control-plane node in "addons-858013" cluster
I0815 23:05:56.733588 1432906 cache.go:121] Beginning downloading kic base image for docker with containerd
I0815 23:05:56.735787 1432906 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
I0815 23:05:56.737974 1432906 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
I0815 23:05:56.738025 1432906 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-1426744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
I0815 23:05:56.738045 1432906 cache.go:56] Caching tarball of preloaded images
I0815 23:05:56.738075 1432906 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
I0815 23:05:56.738139 1432906 preload.go:172] Found /home/jenkins/minikube-integration/19452-1426744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0815 23:05:56.738150 1432906 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
I0815 23:05:56.738511 1432906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/config.json ...
I0815 23:05:56.738590 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/config.json: {Name:mk73cca01801eabfd3cee3dda0d685d9f48344cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:05:56.759077 1432906 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
I0815 23:05:56.759195 1432906 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
I0815 23:05:56.759215 1432906 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
I0815 23:05:56.759220 1432906 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
I0815 23:05:56.759227 1432906 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
I0815 23:05:56.759232 1432906 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
I0815 23:06:13.473091 1432906 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
I0815 23:06:13.473147 1432906 cache.go:194] Successfully downloaded all kic artifacts
I0815 23:06:13.473188 1432906 start.go:360] acquireMachinesLock for addons-858013: {Name:mkc603032d7a7b3709e58f7d8abcb532abb04308 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0815 23:06:13.473321 1432906 start.go:364] duration metric: took 107.627µs to acquireMachinesLock for "addons-858013"
I0815 23:06:13.473349 1432906 start.go:93] Provisioning new machine with config: &{Name:addons-858013 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-858013 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0815 23:06:13.473436 1432906 start.go:125] createHost starting for "" (driver="docker")
I0815 23:06:13.476255 1432906 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0815 23:06:13.476561 1432906 start.go:159] libmachine.API.Create for "addons-858013" (driver="docker")
I0815 23:06:13.476598 1432906 client.go:168] LocalClient.Create starting
I0815 23:06:13.476706 1432906 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/ca.pem
I0815 23:06:13.997987 1432906 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/cert.pem
I0815 23:06:14.160577 1432906 cli_runner.go:164] Run: docker network inspect addons-858013 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0815 23:06:14.178037 1432906 cli_runner.go:211] docker network inspect addons-858013 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0815 23:06:14.178116 1432906 network_create.go:284] running [docker network inspect addons-858013] to gather additional debugging logs...
I0815 23:06:14.178135 1432906 cli_runner.go:164] Run: docker network inspect addons-858013
W0815 23:06:14.192562 1432906 cli_runner.go:211] docker network inspect addons-858013 returned with exit code 1
I0815 23:06:14.192595 1432906 network_create.go:287] error running [docker network inspect addons-858013]: docker network inspect addons-858013: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-858013 not found
I0815 23:06:14.192609 1432906 network_create.go:289] output of [docker network inspect addons-858013]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-858013 not found
** /stderr **
I0815 23:06:14.192703 1432906 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0815 23:06:14.206701 1432906 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017ade50}
I0815 23:06:14.206741 1432906 network_create.go:124] attempt to create docker network addons-858013 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0815 23:06:14.206800 1432906 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-858013 addons-858013
I0815 23:06:14.277614 1432906 network_create.go:108] docker network addons-858013 192.168.49.0/24 created
I0815 23:06:14.277649 1432906 kic.go:121] calculated static IP "192.168.49.2" for the "addons-858013" container
I0815 23:06:14.277732 1432906 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0815 23:06:14.291094 1432906 cli_runner.go:164] Run: docker volume create addons-858013 --label name.minikube.sigs.k8s.io=addons-858013 --label created_by.minikube.sigs.k8s.io=true
I0815 23:06:14.308752 1432906 oci.go:103] Successfully created a docker volume addons-858013
I0815 23:06:14.308839 1432906 cli_runner.go:164] Run: docker run --rm --name addons-858013-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-858013 --entrypoint /usr/bin/test -v addons-858013:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
I0815 23:06:16.352680 1432906 cli_runner.go:217] Completed: docker run --rm --name addons-858013-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-858013 --entrypoint /usr/bin/test -v addons-858013:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (2.04380464s)
I0815 23:06:16.352711 1432906 oci.go:107] Successfully prepared a docker volume addons-858013
I0815 23:06:16.352729 1432906 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
I0815 23:06:16.352749 1432906 kic.go:194] Starting extracting preloaded images to volume ...
I0815 23:06:16.352846 1432906 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19452-1426744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-858013:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
I0815 23:06:20.457275 1432906 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19452-1426744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-858013:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.104383766s)
I0815 23:06:20.457309 1432906 kic.go:203] duration metric: took 4.104556607s to extract preloaded images to volume ...
W0815 23:06:20.457455 1432906 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0815 23:06:20.457572 1432906 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0815 23:06:20.508487 1432906 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-858013 --name addons-858013 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-858013 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-858013 --network addons-858013 --ip 192.168.49.2 --volume addons-858013:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
I0815 23:06:20.850152 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Running}}
I0815 23:06:20.871828 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:20.896712 1432906 cli_runner.go:164] Run: docker exec addons-858013 stat /var/lib/dpkg/alternatives/iptables
I0815 23:06:20.955712 1432906 oci.go:144] the created container "addons-858013" has a running status.
I0815 23:06:20.955749 1432906 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa...
I0815 23:06:21.596239 1432906 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0815 23:06:21.624679 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:21.647394 1432906 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0815 23:06:21.647413 1432906 kic_runner.go:114] Args: [docker exec --privileged addons-858013 chown docker:docker /home/docker/.ssh/authorized_keys]
I0815 23:06:21.716116 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:21.747538 1432906 machine.go:93] provisionDockerMachine start ...
I0815 23:06:21.747784 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:21.779564 1432906 main.go:141] libmachine: Using SSH client type: native
I0815 23:06:21.779970 1432906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil> [] 0s} 127.0.0.1 34617 <nil> <nil>}
I0815 23:06:21.779989 1432906 main.go:141] libmachine: About to run SSH command:
hostname
I0815 23:06:21.924623 1432906 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-858013
I0815 23:06:21.924647 1432906 ubuntu.go:169] provisioning hostname "addons-858013"
I0815 23:06:21.924717 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:21.945643 1432906 main.go:141] libmachine: Using SSH client type: native
I0815 23:06:21.945966 1432906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil> [] 0s} 127.0.0.1 34617 <nil> <nil>}
I0815 23:06:21.945983 1432906 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-858013 && echo "addons-858013" | sudo tee /etc/hostname
I0815 23:06:22.094413 1432906 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-858013
I0815 23:06:22.094492 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:22.111809 1432906 main.go:141] libmachine: Using SSH client type: native
I0815 23:06:22.112060 1432906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil> [] 0s} 127.0.0.1 34617 <nil> <nil>}
I0815 23:06:22.112081 1432906 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-858013' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-858013/g' /etc/hosts;
else
echo '127.0.1.1 addons-858013' | sudo tee -a /etc/hosts;
fi
fi
I0815 23:06:22.241051 1432906 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0815 23:06:22.241082 1432906 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19452-1426744/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-1426744/.minikube}
I0815 23:06:22.241116 1432906 ubuntu.go:177] setting up certificates
I0815 23:06:22.241160 1432906 provision.go:84] configureAuth start
I0815 23:06:22.241230 1432906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-858013
I0815 23:06:22.257625 1432906 provision.go:143] copyHostCerts
I0815 23:06:22.257709 1432906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-1426744/.minikube/ca.pem (1078 bytes)
I0815 23:06:22.257830 1432906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-1426744/.minikube/cert.pem (1123 bytes)
I0815 23:06:22.257894 1432906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-1426744/.minikube/key.pem (1679 bytes)
I0815 23:06:22.257946 1432906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-1426744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-1426744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-1426744/.minikube/certs/ca-key.pem org=jenkins.addons-858013 san=[127.0.0.1 192.168.49.2 addons-858013 localhost minikube]
I0815 23:06:22.567242 1432906 provision.go:177] copyRemoteCerts
I0815 23:06:22.567357 1432906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0815 23:06:22.567429 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:22.583998 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:22.682168 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0815 23:06:22.706705 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0815 23:06:22.730628 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0815 23:06:22.754358 1432906 provision.go:87] duration metric: took 513.177507ms to configureAuth
I0815 23:06:22.754384 1432906 ubuntu.go:193] setting minikube options for container-runtime
I0815 23:06:22.754565 1432906 config.go:182] Loaded profile config "addons-858013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 23:06:22.754580 1432906 machine.go:96] duration metric: took 1.007020236s to provisionDockerMachine
I0815 23:06:22.754588 1432906 client.go:171] duration metric: took 9.277981738s to LocalClient.Create
I0815 23:06:22.754606 1432906 start.go:167] duration metric: took 9.278044688s to libmachine.API.Create "addons-858013"
I0815 23:06:22.754616 1432906 start.go:293] postStartSetup for "addons-858013" (driver="docker")
I0815 23:06:22.754626 1432906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0815 23:06:22.754678 1432906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0815 23:06:22.754726 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:22.771127 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:22.866135 1432906 ssh_runner.go:195] Run: cat /etc/os-release
I0815 23:06:22.869244 1432906 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0815 23:06:22.869291 1432906 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0815 23:06:22.869309 1432906 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0815 23:06:22.869317 1432906 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0815 23:06:22.869327 1432906 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-1426744/.minikube/addons for local assets ...
I0815 23:06:22.869408 1432906 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-1426744/.minikube/files for local assets ...
I0815 23:06:22.869435 1432906 start.go:296] duration metric: took 114.812116ms for postStartSetup
I0815 23:06:22.869742 1432906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-858013
I0815 23:06:22.886010 1432906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/config.json ...
I0815 23:06:22.886313 1432906 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0815 23:06:22.886376 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:22.914166 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:23.001784 1432906 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0815 23:06:23.008237 1432906 start.go:128] duration metric: took 9.53478286s to createHost
I0815 23:06:23.008281 1432906 start.go:83] releasing machines lock for "addons-858013", held for 9.534946707s
I0815 23:06:23.008372 1432906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-858013
I0815 23:06:23.026071 1432906 ssh_runner.go:195] Run: cat /version.json
I0815 23:06:23.026126 1432906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0815 23:06:23.026133 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:23.026219 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:23.052550 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:23.067333 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:23.140886 1432906 ssh_runner.go:195] Run: systemctl --version
I0815 23:06:23.274441 1432906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0815 23:06:23.278726 1432906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0815 23:06:23.303789 1432906 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0815 23:06:23.303871 1432906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0815 23:06:23.333717 1432906 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0815 23:06:23.333742 1432906 start.go:495] detecting cgroup driver to use...
I0815 23:06:23.333774 1432906 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0815 23:06:23.333827 1432906 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0815 23:06:23.346431 1432906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0815 23:06:23.358386 1432906 docker.go:217] disabling cri-docker service (if available) ...
I0815 23:06:23.358453 1432906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0815 23:06:23.372713 1432906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0815 23:06:23.387392 1432906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0815 23:06:23.477569 1432906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0815 23:06:23.570504 1432906 docker.go:233] disabling docker service ...
I0815 23:06:23.570583 1432906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0815 23:06:23.590378 1432906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0815 23:06:23.602659 1432906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0815 23:06:23.686286 1432906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0815 23:06:23.781898 1432906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0815 23:06:23.793757 1432906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0815 23:06:23.809562 1432906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0815 23:06:23.819665 1432906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0815 23:06:23.829738 1432906 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0815 23:06:23.829808 1432906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0815 23:06:23.839858 1432906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0815 23:06:23.849709 1432906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0815 23:06:23.859439 1432906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0815 23:06:23.869820 1432906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0815 23:06:23.879697 1432906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0815 23:06:23.890145 1432906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0815 23:06:23.900500 1432906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0815 23:06:23.911155 1432906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0815 23:06:23.919329 1432906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0815 23:06:23.928036 1432906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0815 23:06:24.008501 1432906 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0815 23:06:24.151382 1432906 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0815 23:06:24.151553 1432906 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0815 23:06:24.155931 1432906 start.go:563] Will wait 60s for crictl version
I0815 23:06:24.156067 1432906 ssh_runner.go:195] Run: which crictl
I0815 23:06:24.159713 1432906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0815 23:06:24.200843 1432906 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.20
RuntimeApiVersion: v1
I0815 23:06:24.200944 1432906 ssh_runner.go:195] Run: containerd --version
I0815 23:06:24.222839 1432906 ssh_runner.go:195] Run: containerd --version
I0815 23:06:24.246528 1432906 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
I0815 23:06:24.249074 1432906 cli_runner.go:164] Run: docker network inspect addons-858013 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0815 23:06:24.263629 1432906 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0815 23:06:24.267607 1432906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0815 23:06:24.277802 1432906 kubeadm.go:883] updating cluster {Name:addons-858013 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-858013 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0815 23:06:24.277918 1432906 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
I0815 23:06:24.277987 1432906 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 23:06:24.313495 1432906 containerd.go:627] all images are preloaded for containerd runtime.
I0815 23:06:24.313522 1432906 containerd.go:534] Images already preloaded, skipping extraction
I0815 23:06:24.313583 1432906 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 23:06:24.347758 1432906 containerd.go:627] all images are preloaded for containerd runtime.
I0815 23:06:24.347781 1432906 cache_images.go:84] Images are preloaded, skipping loading
I0815 23:06:24.347789 1432906 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
I0815 23:06:24.347885 1432906 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-858013 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.0 ClusterName:addons-858013 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0815 23:06:24.347953 1432906 ssh_runner.go:195] Run: sudo crictl info
I0815 23:06:24.387426 1432906 cni.go:84] Creating CNI manager for ""
I0815 23:06:24.387453 1432906 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0815 23:06:24.387463 1432906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0815 23:06:24.387506 1432906 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-858013 NodeName:addons-858013 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0815 23:06:24.387687 1432906 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "addons-858013"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
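Once this file is written to the node (as /var/tmp/minikube/kubeadm.yaml.new below, later copied to /var/tmp/minikube/kubeadm.yaml), it can be checked without touching the cluster; a sketch using the kubeadm binary minikube installs under /var/lib/minikube/binaries (the validate subcommand assumes a kubeadm recent enough to have it, which v1.31.0 is):
  $ sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
  $ sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # render manifests without starting anything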
I0815 23:06:24.387764 1432906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
I0815 23:06:24.396685 1432906 binaries.go:44] Found k8s binaries, skipping transfer
I0815 23:06:24.396783 1432906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0815 23:06:24.405278 1432906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
I0815 23:06:24.422580 1432906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0815 23:06:24.440326 1432906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
I0815 23:06:24.457646 1432906 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0815 23:06:24.460778 1432906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0815 23:06:24.471220 1432906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0815 23:06:24.551618 1432906 ssh_runner.go:195] Run: sudo systemctl start kubelet
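At this point the kubelet is only started, not enabled; the kubeadm preflight further down warns about exactly that. A sketch of checking and, if desired, persisting it on the node:
  $ sudo systemctl is-active kubelet       # should report active once the start above succeeds
  $ sudo systemctl is-enabled kubelet      # likely reports disabled here, hence the later [WARNING Service-Kubelet]
  $ sudo systemctl enable kubelet.service  # the fix the warning suggests (optional for a test cluster)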
I0815 23:06:24.568320 1432906 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013 for IP: 192.168.49.2
I0815 23:06:24.568354 1432906 certs.go:194] generating shared ca certs ...
I0815 23:06:24.568379 1432906 certs.go:226] acquiring lock for ca certs: {Name:mk6675e2307c4a6ff8a3648b5a131049979eec99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:24.568532 1432906 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-1426744/.minikube/ca.key
I0815 23:06:24.852136 1432906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-1426744/.minikube/ca.crt ...
I0815 23:06:24.852170 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/ca.crt: {Name:mk6e27e3ded05a9f1e6a575c49f7bda40718df02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:24.853062 1432906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-1426744/.minikube/ca.key ...
I0815 23:06:24.853094 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/ca.key: {Name:mk2d04cd5821c0177f3c4daff8381f1719488ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:24.853738 1432906 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-1426744/.minikube/proxy-client-ca.key
I0815 23:06:25.425966 1432906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-1426744/.minikube/proxy-client-ca.crt ...
I0815 23:06:25.426003 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/proxy-client-ca.crt: {Name:mk01dbcf0c5b8653d2ef588cb9a176a454c82eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:25.426779 1432906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-1426744/.minikube/proxy-client-ca.key ...
I0815 23:06:25.426795 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/proxy-client-ca.key: {Name:mk40f284c28ca9f74cb1c6a33ecde7d99bad1705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:25.427569 1432906 certs.go:256] generating profile certs ...
I0815 23:06:25.427665 1432906 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/client.key
I0815 23:06:25.427692 1432906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/client.crt with IP's: []
I0815 23:06:25.895715 1432906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/client.crt ...
I0815 23:06:25.895750 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/client.crt: {Name:mke4b708c97c83fe8c1a38e4460656816844f94f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:25.896547 1432906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/client.key ...
I0815 23:06:25.896564 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/client.key: {Name:mkb5b7d5dc0c94ab3fbb4c98c689493bca96f5f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:25.896658 1432906 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.key.1d590a82
I0815 23:06:25.896678 1432906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.crt.1d590a82 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0815 23:06:26.636862 1432906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.crt.1d590a82 ...
I0815 23:06:26.636895 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.crt.1d590a82: {Name:mka0060772bb8141ee3aeb948731c513d2c13d0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:26.637087 1432906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.key.1d590a82 ...
I0815 23:06:26.637104 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.key.1d590a82: {Name:mk9a938d749b6eca3aeb5a6cdaf0e66f06861288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:26.637216 1432906 certs.go:381] copying /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.crt.1d590a82 -> /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.crt
I0815 23:06:26.637306 1432906 certs.go:385] copying /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.key.1d590a82 -> /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.key
I0815 23:06:26.637360 1432906 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/proxy-client.key
I0815 23:06:26.637381 1432906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/proxy-client.crt with IP's: []
I0815 23:06:27.288856 1432906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/proxy-client.crt ...
I0815 23:06:27.288887 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/proxy-client.crt: {Name:mk562b980c59d8f57e2aedd1516b502b564b4190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:27.289073 1432906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/proxy-client.key ...
I0815 23:06:27.289087 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/proxy-client.key: {Name:mkf1f16e74427e1da2803f0e1bb0d27cf4e0af24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:27.289300 1432906 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/ca-key.pem (1679 bytes)
I0815 23:06:27.289345 1432906 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/ca.pem (1078 bytes)
I0815 23:06:27.289375 1432906 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/cert.pem (1123 bytes)
I0815 23:06:27.289403 1432906 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-1426744/.minikube/certs/key.pem (1679 bytes)
I0815 23:06:27.289979 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0815 23:06:27.314984 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0815 23:06:27.338003 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0815 23:06:27.361484 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0815 23:06:27.384659 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0815 23:06:27.408346 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0815 23:06:27.431518 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0815 23:06:27.454748 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/profiles/addons-858013/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0815 23:06:27.477916 1432906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-1426744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0815 23:06:27.501425 1432906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0815 23:06:27.519058 1432906 ssh_runner.go:195] Run: openssl version
I0815 23:06:27.524535 1432906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0815 23:06:27.534409 1432906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0815 23:06:27.538492 1432906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
I0815 23:06:27.538603 1432906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0815 23:06:27.545941 1432906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
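The b5213941.0 name is not arbitrary: it is the OpenSSL subject hash of the minikube CA, i.e. the value the openssl x509 -hash call above prints, which is then reused for the symlink in /etc/ssl/certs. A sketch of verifying that by hand inside the node:
  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash used for the symlink name (b5213941 here)
  $ ls -l /etc/ssl/certs/b5213941.0                                           # the link created by the ln -fs above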
I0815 23:06:27.555389 1432906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0815 23:06:27.559487 1432906 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0815 23:06:27.559584 1432906 kubeadm.go:392] StartCluster: {Name:addons-858013 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-858013 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0815 23:06:27.559696 1432906 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0815 23:06:27.559798 1432906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0815 23:06:27.601283 1432906 cri.go:89] found id: ""
I0815 23:06:27.601400 1432906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0815 23:06:27.612507 1432906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0815 23:06:27.621518 1432906 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0815 23:06:27.621588 1432906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0815 23:06:27.630231 1432906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0815 23:06:27.630251 1432906 kubeadm.go:157] found existing configuration files:
I0815 23:06:27.630304 1432906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0815 23:06:27.638624 1432906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0815 23:06:27.638687 1432906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0815 23:06:27.647074 1432906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0815 23:06:27.657152 1432906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0815 23:06:27.657220 1432906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0815 23:06:27.665773 1432906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0815 23:06:27.675047 1432906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0815 23:06:27.675181 1432906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0815 23:06:27.683977 1432906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0815 23:06:27.692838 1432906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0815 23:06:27.692953 1432906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0815 23:06:27.701298 1432906 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0815 23:06:27.742947 1432906 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
I0815 23:06:27.743762 1432906 kubeadm.go:310] [preflight] Running pre-flight checks
I0815 23:06:27.761432 1432906 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0815 23:06:27.761580 1432906 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
I0815 23:06:27.761650 1432906 kubeadm.go:310] OS: Linux
I0815 23:06:27.761714 1432906 kubeadm.go:310] CGROUPS_CPU: enabled
I0815 23:06:27.761791 1432906 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0815 23:06:27.761888 1432906 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0815 23:06:27.761974 1432906 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0815 23:06:27.762036 1432906 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0815 23:06:27.762100 1432906 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0815 23:06:27.762160 1432906 kubeadm.go:310] CGROUPS_PIDS: enabled
I0815 23:06:27.762225 1432906 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0815 23:06:27.762288 1432906 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0815 23:06:27.821742 1432906 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0815 23:06:27.821913 1432906 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0815 23:06:27.822058 1432906 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0815 23:06:27.827319 1432906 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0815 23:06:27.830896 1432906 out.go:235] - Generating certificates and keys ...
I0815 23:06:27.831080 1432906 kubeadm.go:310] [certs] Using existing ca certificate authority
I0815 23:06:27.831201 1432906 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0815 23:06:28.913987 1432906 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0815 23:06:29.787423 1432906 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0815 23:06:30.275609 1432906 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0815 23:06:30.723932 1432906 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0815 23:06:31.031710 1432906 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0815 23:06:31.031994 1432906 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-858013 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0815 23:06:31.345090 1432906 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0815 23:06:31.345437 1432906 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-858013 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0815 23:06:32.305398 1432906 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0815 23:06:32.735851 1432906 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0815 23:06:33.018502 1432906 kubeadm.go:310] [certs] Generating "sa" key and public key
I0815 23:06:33.018778 1432906 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0815 23:06:33.463258 1432906 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0815 23:06:33.792543 1432906 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0815 23:06:34.009969 1432906 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0815 23:06:34.194301 1432906 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0815 23:06:34.386134 1432906 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0815 23:06:34.386918 1432906 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0815 23:06:34.392005 1432906 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0815 23:06:34.394031 1432906 out.go:235] - Booting up control plane ...
I0815 23:06:34.394126 1432906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0815 23:06:34.394200 1432906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0815 23:06:34.394999 1432906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0815 23:06:34.405595 1432906 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0815 23:06:34.412094 1432906 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0815 23:06:34.412148 1432906 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0815 23:06:34.525926 1432906 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0815 23:06:34.526395 1432906 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0815 23:06:36.029636 1432906 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.503429165s
I0815 23:06:36.029745 1432906 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0815 23:06:42.031192 1432906 kubeadm.go:310] [api-check] The API server is healthy after 6.001867907s
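Both checks above poll plain HTTP(S) health endpoints, so they can be reproduced by hand from inside the node; a sketch (whether the API server answers /healthz anonymously depends on the default RBAC, which kubeadm normally leaves in place):
  $ curl -sf http://127.0.0.1:10248/healthz && echo kubelet-ok       # the [kubelet-check] endpoint
  $ curl -skf https://192.168.49.2:8443/healthz && echo apiserver-ok # the [api-check] endpoint; -k because the serving cert is cluster-signed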
I0815 23:06:42.059835 1432906 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0815 23:06:42.077108 1432906 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0815 23:06:42.112433 1432906 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0815 23:06:42.112642 1432906 kubeadm.go:310] [mark-control-plane] Marking the node addons-858013 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0815 23:06:42.125298 1432906 kubeadm.go:310] [bootstrap-token] Using token: 9kpno7.9xnb1jlvqy081p9t
I0815 23:06:42.127643 1432906 out.go:235] - Configuring RBAC rules ...
I0815 23:06:42.127783 1432906 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0815 23:06:42.137231 1432906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0815 23:06:42.151185 1432906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0815 23:06:42.159202 1432906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0815 23:06:42.165735 1432906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0815 23:06:42.171489 1432906 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0815 23:06:42.438120 1432906 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0815 23:06:42.864529 1432906 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0815 23:06:43.438618 1432906 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0815 23:06:43.439841 1432906 kubeadm.go:310]
I0815 23:06:43.439912 1432906 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0815 23:06:43.439918 1432906 kubeadm.go:310]
I0815 23:06:43.439993 1432906 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0815 23:06:43.439998 1432906 kubeadm.go:310]
I0815 23:06:43.440022 1432906 kubeadm.go:310] mkdir -p $HOME/.kube
I0815 23:06:43.440079 1432906 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0815 23:06:43.440128 1432906 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0815 23:06:43.440133 1432906 kubeadm.go:310]
I0815 23:06:43.440184 1432906 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0815 23:06:43.440189 1432906 kubeadm.go:310]
I0815 23:06:43.440258 1432906 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0815 23:06:43.440263 1432906 kubeadm.go:310]
I0815 23:06:43.440313 1432906 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0815 23:06:43.440386 1432906 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0815 23:06:43.440452 1432906 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0815 23:06:43.440456 1432906 kubeadm.go:310]
I0815 23:06:43.440537 1432906 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0815 23:06:43.440611 1432906 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0815 23:06:43.440615 1432906 kubeadm.go:310]
I0815 23:06:43.440696 1432906 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9kpno7.9xnb1jlvqy081p9t \
I0815 23:06:43.440795 1432906 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:32669f4de8f99e6c0e1b9ac322d232cfeb01cf3bf047d97f10a64a1a8635640a \
I0815 23:06:43.440816 1432906 kubeadm.go:310] --control-plane
I0815 23:06:43.440821 1432906 kubeadm.go:310]
I0815 23:06:43.440902 1432906 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0815 23:06:43.440906 1432906 kubeadm.go:310]
I0815 23:06:43.440985 1432906 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9kpno7.9xnb1jlvqy081p9t \
I0815 23:06:43.441083 1432906 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:32669f4de8f99e6c0e1b9ac322d232cfeb01cf3bf047d97f10a64a1a8635640a
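For reference, the --discovery-token-ca-cert-hash value in the join commands is the SHA-256 of the cluster CA's public key. A sketch of recomputing it on the node, using the certificatesDir configured above (/var/lib/minikube/certs):
  $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'   # should match the 32669f4d... digest above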
I0815 23:06:43.444742 1432906 kubeadm.go:310] W0815 23:06:27.738944 1029 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0815 23:06:43.445041 1432906 kubeadm.go:310] W0815 23:06:27.739647 1029 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0815 23:06:43.445267 1432906 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
I0815 23:06:43.445369 1432906 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
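The two deprecation warnings point at kubeadm's own migration helper; a minimal sketch of that workflow on the node (the output path here is hypothetical):
  $ sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml   # rewrites the same settings under the newer API version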
I0815 23:06:43.445384 1432906 cni.go:84] Creating CNI manager for ""
I0815 23:06:43.445392 1432906 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0815 23:06:43.447764 1432906 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0815 23:06:43.449885 1432906 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0815 23:06:43.453782 1432906 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
I0815 23:06:43.453803 1432906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0815 23:06:43.471634 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0815 23:06:43.743393 1432906 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0815 23:06:43.743532 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0815 23:06:43.743628 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-858013 minikube.k8s.io/updated_at=2024_08_15T23_06_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=addons-858013 minikube.k8s.io/primary=true
I0815 23:06:43.936918 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0815 23:06:43.937016 1432906 ops.go:34] apiserver oom_adj: -16
I0815 23:06:44.437059 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0815 23:06:44.937991 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0815 23:06:45.437917 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0815 23:06:45.936998 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0815 23:06:46.437044 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0815 23:06:46.937596 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0815 23:06:47.437628 1432906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0815 23:06:47.537713 1432906 kubeadm.go:1113] duration metric: took 3.794223105s to wait for elevateKubeSystemPrivileges
I0815 23:06:47.537739 1432906 kubeadm.go:394] duration metric: took 19.978159881s to StartCluster
I0815 23:06:47.537755 1432906 settings.go:142] acquiring lock: {Name:mk04eec9d963add10e61814118070c2ffabcc53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:47.538562 1432906 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19452-1426744/kubeconfig
I0815 23:06:47.538942 1432906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-1426744/kubeconfig: {Name:mkd0ceb99da18ce0b4af7c49a7baba0665c5eba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0815 23:06:47.539137 1432906 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0815 23:06:47.539268 1432906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0815 23:06:47.539508 1432906 config.go:182] Loaded profile config "addons-858013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 23:06:47.539535 1432906 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
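This toEnable map mirrors what the minikube addons CLI exposes; a sketch of inspecting or toggling the same addons against this profile (illustrative only, since this run enables them at start):
  $ out/minikube-linux-arm64 addons list -p addons-858013             # enabled/disabled state per addon
  $ out/minikube-linux-arm64 addons enable volcano -p addons-858013   # example: enable the volcano addon for this profile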
I0815 23:06:47.539606 1432906 addons.go:69] Setting yakd=true in profile "addons-858013"
I0815 23:06:47.539627 1432906 addons.go:234] Setting addon yakd=true in "addons-858013"
I0815 23:06:47.539650 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.540116 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.540246 1432906 addons.go:69] Setting inspektor-gadget=true in profile "addons-858013"
I0815 23:06:47.540266 1432906 addons.go:234] Setting addon inspektor-gadget=true in "addons-858013"
I0815 23:06:47.540286 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.540644 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.541090 1432906 addons.go:69] Setting metrics-server=true in profile "addons-858013"
I0815 23:06:47.541119 1432906 addons.go:234] Setting addon metrics-server=true in "addons-858013"
I0815 23:06:47.541170 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.541560 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.542739 1432906 addons.go:69] Setting cloud-spanner=true in profile "addons-858013"
I0815 23:06:47.542776 1432906 addons.go:234] Setting addon cloud-spanner=true in "addons-858013"
I0815 23:06:47.542807 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.543215 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.545553 1432906 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-858013"
I0815 23:06:47.545637 1432906 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-858013"
I0815 23:06:47.545703 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.546261 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.555592 1432906 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-858013"
I0815 23:06:47.555669 1432906 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-858013"
I0815 23:06:47.555700 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.556154 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.556710 1432906 addons.go:69] Setting registry=true in profile "addons-858013"
I0815 23:06:47.556748 1432906 addons.go:234] Setting addon registry=true in "addons-858013"
I0815 23:06:47.556777 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.557272 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.580646 1432906 addons.go:69] Setting storage-provisioner=true in profile "addons-858013"
I0815 23:06:47.580703 1432906 addons.go:234] Setting addon storage-provisioner=true in "addons-858013"
I0815 23:06:47.580743 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.580046 1432906 addons.go:69] Setting default-storageclass=true in profile "addons-858013"
I0815 23:06:47.581265 1432906 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-858013"
I0815 23:06:47.581508 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.588133 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.592150 1432906 addons.go:69] Setting gcp-auth=true in profile "addons-858013"
I0815 23:06:47.592214 1432906 mustload.go:65] Loading cluster: addons-858013
I0815 23:06:47.592421 1432906 config.go:182] Loaded profile config "addons-858013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 23:06:47.592682 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.593976 1432906 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-858013"
I0815 23:06:47.610469 1432906 addons.go:69] Setting volcano=true in profile "addons-858013"
I0815 23:06:47.615548 1432906 addons.go:234] Setting addon volcano=true in "addons-858013"
I0815 23:06:47.615701 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.619785 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.625332 1432906 addons.go:69] Setting ingress=true in profile "addons-858013"
I0815 23:06:47.625945 1432906 addons.go:234] Setting addon ingress=true in "addons-858013"
I0815 23:06:47.626014 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.626495 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.640860 1432906 addons.go:69] Setting volumesnapshots=true in profile "addons-858013"
I0815 23:06:47.640903 1432906 addons.go:234] Setting addon volumesnapshots=true in "addons-858013"
I0815 23:06:47.640941 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.641474 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.647986 1432906 addons.go:69] Setting ingress-dns=true in profile "addons-858013"
I0815 23:06:47.648044 1432906 addons.go:234] Setting addon ingress-dns=true in "addons-858013"
I0815 23:06:47.648091 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.648568 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.656721 1432906 out.go:177] * Verifying Kubernetes components...
I0815 23:06:47.662166 1432906 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0815 23:06:47.664857 1432906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0815 23:06:47.665060 1432906 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0815 23:06:47.665075 1432906 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0815 23:06:47.665321 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.667378 1432906 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
I0815 23:06:47.668964 1432906 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0815 23:06:47.668983 1432906 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0815 23:06:47.669048 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.679800 1432906 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-858013"
I0815 23:06:47.680133 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.723799 1432906 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
I0815 23:06:47.725956 1432906 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0815 23:06:47.725976 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0815 23:06:47.726042 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.754055 1432906 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0815 23:06:47.757280 1432906 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0815 23:06:47.759200 1432906 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0815 23:06:47.769279 1432906 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0815 23:06:47.770678 1432906 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
I0815 23:06:47.774675 1432906 out.go:177] - Using image docker.io/registry:2.8.3
I0815 23:06:47.775967 1432906 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0815 23:06:47.792226 1432906 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0815 23:06:47.792288 1432906 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0815 23:06:47.792379 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.795162 1432906 addons.go:234] Setting addon default-storageclass=true in "addons-858013"
I0815 23:06:47.795234 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.795673 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.811233 1432906 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0815 23:06:47.811516 1432906 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0815 23:06:47.811942 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0815 23:06:47.812012 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.811523 1432906 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
I0815 23:06:47.834795 1432906 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0815 23:06:47.811528 1432906 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0815 23:06:47.837320 1432906 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
I0815 23:06:47.839787 1432906 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
I0815 23:06:47.839925 1432906 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0815 23:06:47.839936 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0815 23:06:47.840000 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.840187 1432906 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0815 23:06:47.841949 1432906 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0815 23:06:47.843991 1432906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0815 23:06:47.844039 1432906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0815 23:06:47.844125 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.811616 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.854165 1432906 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0815 23:06:47.862327 1432906 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0815 23:06:47.869366 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:47.871116 1432906 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-858013"
I0815 23:06:47.871156 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:47.871591 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:47.874529 1432906 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0815 23:06:47.874548 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0815 23:06:47.874616 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.892084 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:47.895237 1432906 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0815 23:06:47.899005 1432906 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0815 23:06:47.900487 1432906 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0815 23:06:47.900503 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0815 23:06:47.900567 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.903954 1432906 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0815 23:06:47.907573 1432906 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0815 23:06:47.907598 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0815 23:06:47.907670 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.939312 1432906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0815 23:06:47.939339 1432906 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0815 23:06:47.939412 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.948593 1432906 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0815 23:06:47.956274 1432906 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0815 23:06:47.956297 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0815 23:06:47.956398 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:47.967173 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.000466 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.018455 1432906 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0815 23:06:48.018478 1432906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0815 23:06:48.018544 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:48.018846 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.056439 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.098013 1432906 out.go:177] - Using image docker.io/busybox:stable
I0815 23:06:48.100365 1432906 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0815 23:06:48.103144 1432906 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0815 23:06:48.103167 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0815 23:06:48.103246 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:48.105680 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.128888 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.131153 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.141228 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.149535 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.152829 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.167351 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
W0815 23:06:48.174331 1432906 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0815 23:06:48.174369 1432906 retry.go:31] will retry after 145.903834ms: ssh: handshake failed: EOF
I0815 23:06:48.180735 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:48.466589 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0815 23:06:48.488986 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0815 23:06:48.497569 1432906 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0815 23:06:48.497639 1432906 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0815 23:06:48.514630 1432906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0815 23:06:48.514695 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0815 23:06:48.548308 1432906 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0815 23:06:48.548378 1432906 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0815 23:06:48.574605 1432906 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0815 23:06:48.574681 1432906 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0815 23:06:48.607033 1432906 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0815 23:06:48.607105 1432906 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0815 23:06:48.633046 1432906 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0815 23:06:48.633203 1432906 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.093919122s)
I0815 23:06:48.633370 1432906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0815 23:06:48.642359 1432906 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0815 23:06:48.642431 1432906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0815 23:06:48.680253 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0815 23:06:48.694244 1432906 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0815 23:06:48.694325 1432906 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0815 23:06:48.716930 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0815 23:06:48.731197 1432906 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0815 23:06:48.731269 1432906 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0815 23:06:48.733116 1432906 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0815 23:06:48.733209 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0815 23:06:48.736020 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0815 23:06:48.748956 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0815 23:06:48.771091 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0815 23:06:48.795201 1432906 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0815 23:06:48.795271 1432906 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0815 23:06:48.797239 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0815 23:06:48.806894 1432906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0815 23:06:48.806968 1432906 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0815 23:06:48.868078 1432906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0815 23:06:48.868149 1432906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0815 23:06:48.915928 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0815 23:06:48.918790 1432906 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0815 23:06:48.918858 1432906 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0815 23:06:48.959826 1432906 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0815 23:06:48.959913 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0815 23:06:49.018601 1432906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0815 23:06:49.018676 1432906 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0815 23:06:49.018893 1432906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0815 23:06:49.018944 1432906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0815 23:06:49.027490 1432906 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0815 23:06:49.027564 1432906 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0815 23:06:49.076845 1432906 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0815 23:06:49.076917 1432906 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0815 23:06:49.191526 1432906 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0815 23:06:49.191600 1432906 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0815 23:06:49.285862 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0815 23:06:49.304786 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0815 23:06:49.340388 1432906 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0815 23:06:49.340464 1432906 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0815 23:06:49.354029 1432906 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0815 23:06:49.354053 1432906 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0815 23:06:49.376673 1432906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0815 23:06:49.376758 1432906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0815 23:06:49.528691 1432906 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0815 23:06:49.528773 1432906 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0815 23:06:49.546725 1432906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0815 23:06:49.546796 1432906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0815 23:06:49.731424 1432906 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0815 23:06:49.731495 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0815 23:06:50.062577 1432906 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0815 23:06:50.062647 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0815 23:06:50.136517 1432906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0815 23:06:50.136602 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0815 23:06:50.235887 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0815 23:06:50.469743 1432906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0815 23:06:50.469826 1432906 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0815 23:06:50.490906 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0815 23:06:50.672364 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.205687882s)
I0815 23:06:50.901740 1432906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0815 23:06:50.901813 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0815 23:06:51.217173 1432906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0815 23:06:51.217241 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0815 23:06:51.296930 1432906 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.663521235s)
I0815 23:06:51.297050 1432906 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0815 23:06:51.296966 1432906 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.663851498s)
I0815 23:06:51.297977 1432906 node_ready.go:35] waiting up to 6m0s for node "addons-858013" to be "Ready" ...
I0815 23:06:51.298578 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.809516913s)
I0815 23:06:51.302189 1432906 node_ready.go:49] node "addons-858013" has status "Ready":"True"
I0815 23:06:51.302249 1432906 node_ready.go:38] duration metric: took 4.254967ms for node "addons-858013" to be "Ready" ...
I0815 23:06:51.302274 1432906 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0815 23:06:51.312312 1432906 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rp4nh" in "kube-system" namespace to be "Ready" ...
I0815 23:06:51.806763 1432906 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-858013" context rescaled to 1 replicas
I0815 23:06:51.824932 1432906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0815 23:06:51.824953 1432906 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0815 23:06:52.466459 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0815 23:06:53.350874 1432906 pod_ready.go:103] pod "coredns-6f6b679f8f-rp4nh" in "kube-system" namespace has status "Ready":"False"
I0815 23:06:55.080032 1432906 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0815 23:06:55.080211 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:55.116395 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:55.584288 1432906 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0815 23:06:55.656652 1432906 addons.go:234] Setting addon gcp-auth=true in "addons-858013"
I0815 23:06:55.656705 1432906 host.go:66] Checking if "addons-858013" exists ...
I0815 23:06:55.657344 1432906 cli_runner.go:164] Run: docker container inspect addons-858013 --format={{.State.Status}}
I0815 23:06:55.686950 1432906 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0815 23:06:55.687011 1432906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-858013
I0815 23:06:55.725869 1432906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34617 SSHKeyPath:/home/jenkins/minikube-integration/19452-1426744/.minikube/machines/addons-858013/id_rsa Username:docker}
I0815 23:06:55.817896 1432906 pod_ready.go:103] pod "coredns-6f6b679f8f-rp4nh" in "kube-system" namespace has status "Ready":"False"
I0815 23:06:56.453853 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.773528572s)
I0815 23:06:56.453883 1432906 addons.go:475] Verifying addon ingress=true in "addons-858013"
I0815 23:06:56.454132 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.737132304s)
I0815 23:06:56.454270 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.718185053s)
I0815 23:06:56.454392 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.705360659s)
I0815 23:06:56.454572 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.683414576s)
I0815 23:06:56.456960 1432906 out.go:177] * Verifying ingress addon...
I0815 23:06:56.459880 1432906 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
W0815 23:06:56.474110 1432906 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0815 23:06:56.476048 1432906 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0815 23:06:56.476115 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:06:56.987860 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:06:57.468157 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:06:57.830871 1432906 pod_ready.go:103] pod "coredns-6f6b679f8f-rp4nh" in "kube-system" namespace has status "Ready":"False"
I0815 23:06:57.841083 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.043769446s)
I0815 23:06:57.841203 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.925200396s)
I0815 23:06:57.841252 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.555321581s)
I0815 23:06:57.841796 1432906 addons.go:475] Verifying addon registry=true in "addons-858013"
I0815 23:06:57.841308 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.536438559s)
I0815 23:06:57.841942 1432906 addons.go:475] Verifying addon metrics-server=true in "addons-858013"
I0815 23:06:57.841363 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.60540315s)
I0815 23:06:57.841434 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.350454039s)
W0815 23:06:57.842051 1432906 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0815 23:06:57.842099 1432906 retry.go:31] will retry after 365.553844ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0815 23:06:57.844506 1432906 out.go:177] * Verifying registry addon...
I0815 23:06:57.844695 1432906 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-858013 service yakd-dashboard -n yakd-dashboard
I0815 23:06:57.848183 1432906 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0815 23:06:57.913675 1432906 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0815 23:06:57.913705 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:06:58.026517 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:06:58.207914 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0815 23:06:58.353637 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:06:58.464965 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:06:58.829251 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.362701547s)
I0815 23:06:58.829284 1432906 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-858013"
I0815 23:06:58.829421 1432906 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.14244778s)
I0815 23:06:58.831846 1432906 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
I0815 23:06:58.831899 1432906 out.go:177] * Verifying csi-hostpath-driver addon...
I0815 23:06:58.834907 1432906 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0815 23:06:58.838518 1432906 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0815 23:06:58.840428 1432906 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0815 23:06:58.840445 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:06:58.841241 1432906 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0815 23:06:58.841265 1432906 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0815 23:06:58.864971 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:06:58.898780 1432906 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0815 23:06:58.898804 1432906 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0815 23:06:58.922178 1432906 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0815 23:06:58.922198 1432906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0815 23:06:58.964312 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:06:58.994035 1432906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0815 23:06:59.340564 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:06:59.354722 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:06:59.465651 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:06:59.840301 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:06:59.851577 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:06:59.881967 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.674002679s)
I0815 23:06:59.965543 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:00.242965 1432906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.248885772s)
I0815 23:07:00.259438 1432906 addons.go:475] Verifying addon gcp-auth=true in "addons-858013"
I0815 23:07:00.263767 1432906 out.go:177] * Verifying gcp-auth addon...
I0815 23:07:00.274134 1432906 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0815 23:07:00.277940 1432906 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0815 23:07:00.325672 1432906 pod_ready.go:103] pod "coredns-6f6b679f8f-rp4nh" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:00.380461 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:00.380935 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:00.479848 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:00.840841 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:00.852344 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:00.965210 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:01.380076 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:01.381631 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:01.480236 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:01.840129 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:01.852698 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:01.965264 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:02.379792 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:02.381643 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:02.465205 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:02.820452 1432906 pod_ready.go:103] pod "coredns-6f6b679f8f-rp4nh" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:02.840231 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:02.852642 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:02.964259 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:03.340394 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:03.352450 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:03.464670 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:03.880290 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:03.881380 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:03.964285 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:04.379423 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:04.381879 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:04.480895 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:04.840424 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:04.852829 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:04.966380 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:05.341299 1432906 pod_ready.go:103] pod "coredns-6f6b679f8f-rp4nh" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:05.343727 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:05.352642 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:05.464596 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:05.888253 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:05.889371 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:05.964253 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:06.320323 1432906 pod_ready.go:93] pod "coredns-6f6b679f8f-rp4nh" in "kube-system" namespace has status "Ready":"True"
I0815 23:07:06.320351 1432906 pod_ready.go:82] duration metric: took 15.007953403s for pod "coredns-6f6b679f8f-rp4nh" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.320364 1432906 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wjh96" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.322923 1432906 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-wjh96" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-wjh96" not found
I0815 23:07:06.322951 1432906 pod_ready.go:82] duration metric: took 2.579248ms for pod "coredns-6f6b679f8f-wjh96" in "kube-system" namespace to be "Ready" ...
E0815 23:07:06.322962 1432906 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-wjh96" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-wjh96" not found
I0815 23:07:06.322987 1432906 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-858013" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.329293 1432906 pod_ready.go:93] pod "etcd-addons-858013" in "kube-system" namespace has status "Ready":"True"
I0815 23:07:06.329361 1432906 pod_ready.go:82] duration metric: took 6.36073ms for pod "etcd-addons-858013" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.329392 1432906 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-858013" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.335058 1432906 pod_ready.go:93] pod "kube-apiserver-addons-858013" in "kube-system" namespace has status "Ready":"True"
I0815 23:07:06.335083 1432906 pod_ready.go:82] duration metric: took 5.667105ms for pod "kube-apiserver-addons-858013" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.335095 1432906 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-858013" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.339578 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:06.345902 1432906 pod_ready.go:93] pod "kube-controller-manager-addons-858013" in "kube-system" namespace has status "Ready":"True"
I0815 23:07:06.345929 1432906 pod_ready.go:82] duration metric: took 10.80786ms for pod "kube-controller-manager-addons-858013" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.345943 1432906 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4pp86" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.380455 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:06.482282 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:06.517748 1432906 pod_ready.go:93] pod "kube-proxy-4pp86" in "kube-system" namespace has status "Ready":"True"
I0815 23:07:06.517773 1432906 pod_ready.go:82] duration metric: took 171.792636ms for pod "kube-proxy-4pp86" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.517810 1432906 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-858013" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.841479 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:06.853795 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:06.918320 1432906 pod_ready.go:93] pod "kube-scheduler-addons-858013" in "kube-system" namespace has status "Ready":"True"
I0815 23:07:06.918392 1432906 pod_ready.go:82] duration metric: took 400.564378ms for pod "kube-scheduler-addons-858013" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.918418 1432906 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace to be "Ready" ...
I0815 23:07:06.974808 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:07.339457 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:07.351426 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:07.465191 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:07.840094 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:07.851773 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:07.965473 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:08.340431 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:08.352528 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:08.471038 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:08.841002 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:08.852236 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:08.926379 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:08.965491 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:09.340877 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:09.352527 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:09.465580 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:09.840953 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:09.854023 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0815 23:07:09.965273 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:10.340650 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:10.351984 1432906 kapi.go:107] duration metric: took 12.503798801s to wait for kubernetes.io/minikube-addons=registry ...
I0815 23:07:10.466154 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:10.840799 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:10.964377 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:11.340006 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:11.426000 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:11.464944 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:11.845721 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:11.964909 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:12.384385 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:12.486165 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:12.849479 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:12.971157 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:13.340185 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:13.431110 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:13.487866 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:13.872941 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:13.970455 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:14.347158 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:14.480367 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:14.885434 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:14.965611 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:15.341042 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:15.464653 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:15.842644 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:15.924968 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:15.965189 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:16.340124 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:16.464285 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:16.887364 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:16.996444 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:17.340691 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:17.464330 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:17.841177 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:17.965189 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:18.341822 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:18.426988 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:18.466025 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:18.839833 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:18.964607 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:19.407329 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:19.464598 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:19.839807 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:19.964059 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:20.339574 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:20.464781 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:20.840466 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:20.925067 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:20.964581 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:21.341485 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:21.465890 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:21.839669 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:21.964392 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:22.341045 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:22.464856 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:22.839745 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:22.964065 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:23.341861 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:23.425662 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:23.467009 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:23.840222 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:23.965267 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:24.339409 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:24.464645 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:24.841122 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:24.965697 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:25.342377 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:25.426211 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:25.464105 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:25.839657 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:25.965994 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:26.339907 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:26.464958 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:26.839474 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:26.964656 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:27.341184 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:27.448483 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:27.467571 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:27.846215 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:27.964369 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:28.340609 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:28.464914 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:28.840785 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:28.965284 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:29.340773 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:29.464878 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:29.839187 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:29.924671 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:29.964457 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:30.340590 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:30.471638 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:30.840178 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:30.964878 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:31.340772 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:31.465399 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:31.840053 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:31.924997 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:31.964714 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:32.340192 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:32.465682 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:32.887825 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:32.984560 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:33.340552 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:33.464237 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:33.840186 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:33.964191 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:34.340423 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:34.424416 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:34.465305 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:34.842706 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:34.964919 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:35.341091 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:35.464497 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:35.881782 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:35.982378 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:36.339614 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:36.429040 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:36.469755 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:36.840745 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:36.964621 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:37.339275 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:37.464389 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:37.840011 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:37.965531 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:38.339888 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:38.464452 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:38.840470 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:38.925847 1432906 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"False"
I0815 23:07:38.964459 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:39.339583 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:39.464759 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:39.839359 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:39.924391 1432906 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace has status "Ready":"True"
I0815 23:07:39.924417 1432906 pod_ready.go:82] duration metric: took 33.005977106s for pod "nvidia-device-plugin-daemonset-89x6p" in "kube-system" namespace to be "Ready" ...
I0815 23:07:39.924427 1432906 pod_ready.go:39] duration metric: took 48.622121256s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0815 23:07:39.924468 1432906 api_server.go:52] waiting for apiserver process to appear ...
I0815 23:07:39.924552 1432906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0815 23:07:39.942018 1432906 api_server.go:72] duration metric: took 52.402854775s to wait for apiserver process to appear ...
I0815 23:07:39.942046 1432906 api_server.go:88] waiting for apiserver healthz status ...
I0815 23:07:39.942069 1432906 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0815 23:07:39.950449 1432906 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0815 23:07:39.951463 1432906 api_server.go:141] control plane version: v1.31.0
I0815 23:07:39.951488 1432906 api_server.go:131] duration metric: took 9.434664ms to wait for apiserver health ...
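The healthz probe logged above is just an HTTPS GET against the kube-apiserver; two equivalent manual checks (assuming the context name and endpoint from this log are still reachable) are sketched below.
  # Ask the apiserver for the same /healthz endpoint through kubectl's credentials; prints "ok" when healthy.
  kubectl --context addons-858013 get --raw /healthz
  # Or hit the endpoint directly, skipping TLS verification for a quick manual probe.
  curl -k https://192.168.49.2:8443/healthz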
I0815 23:07:39.951496 1432906 system_pods.go:43] waiting for kube-system pods to appear ...
I0815 23:07:39.960957 1432906 system_pods.go:59] 18 kube-system pods found
I0815 23:07:39.960993 1432906 system_pods.go:61] "coredns-6f6b679f8f-rp4nh" [db0d0ea3-a6b8-4760-b57b-f62e22fc977c] Running
I0815 23:07:39.961000 1432906 system_pods.go:61] "csi-hostpath-attacher-0" [8c34ee53-079d-4d06-9f9c-58a7723f421b] Running
I0815 23:07:39.961005 1432906 system_pods.go:61] "csi-hostpath-resizer-0" [992f9408-abc8-4ec6-b7d0-bb23ec11910f] Running
I0815 23:07:39.961036 1432906 system_pods.go:61] "csi-hostpathplugin-tstkq" [17ce4001-a262-445c-97b1-c5e80a5afdbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0815 23:07:39.961050 1432906 system_pods.go:61] "etcd-addons-858013" [b7e35e09-0de7-4030-b0bf-f2d7d07b99a6] Running
I0815 23:07:39.961057 1432906 system_pods.go:61] "kindnet-pj2dh" [c087035d-81cd-4799-8507-73ad9e34e978] Running
I0815 23:07:39.961068 1432906 system_pods.go:61] "kube-apiserver-addons-858013" [9320c4b5-18b5-4df2-a666-b05af7b4b8b4] Running
I0815 23:07:39.961093 1432906 system_pods.go:61] "kube-controller-manager-addons-858013" [a47593aa-3382-470c-84af-5965ce5b8f80] Running
I0815 23:07:39.961108 1432906 system_pods.go:61] "kube-ingress-dns-minikube" [836d7509-203d-412a-92b2-e3ee78eb0c58] Running
I0815 23:07:39.961112 1432906 system_pods.go:61] "kube-proxy-4pp86" [71d9ab1a-c995-45f6-8d09-fa813657999e] Running
I0815 23:07:39.961124 1432906 system_pods.go:61] "kube-scheduler-addons-858013" [7e19d8cb-19e6-4761-85a9-db6648e5442b] Running
I0815 23:07:39.961327 1432906 system_pods.go:61] "metrics-server-8988944d9-nk895" [3644ef7d-3f82-4177-ae94-7d3040f0955f] Running
I0815 23:07:39.961342 1432906 system_pods.go:61] "nvidia-device-plugin-daemonset-89x6p" [707df689-f654-4888-8964-e9a3f923d28b] Running
I0815 23:07:39.961347 1432906 system_pods.go:61] "registry-6fb4cdfc84-754z8" [0945994d-4bbb-4a17-b58f-c430bdba570c] Running
I0815 23:07:39.961351 1432906 system_pods.go:61] "registry-proxy-vgh7t" [d1e1bad6-4bcf-47da-94ed-c325eb427c7d] Running
I0815 23:07:39.961355 1432906 system_pods.go:61] "snapshot-controller-56fcc65765-6b55s" [2ff68f50-eee7-4d17-ac1a-6c40abbf7930] Running
I0815 23:07:39.961362 1432906 system_pods.go:61] "snapshot-controller-56fcc65765-8x6gf" [23fdb9cc-93a5-49ed-a12a-b5e507d6f2bd] Running
I0815 23:07:39.961372 1432906 system_pods.go:61] "storage-provisioner" [482151cf-a7d4-44cd-9cfa-da27b6188c9b] Running
I0815 23:07:39.961391 1432906 system_pods.go:74] duration metric: took 9.876352ms to wait for pod list to return data ...
I0815 23:07:39.961405 1432906 default_sa.go:34] waiting for default service account to be created ...
I0815 23:07:39.964509 1432906 default_sa.go:45] found service account: "default"
I0815 23:07:39.964532 1432906 default_sa.go:55] duration metric: took 3.118913ms for default service account to be created ...
I0815 23:07:39.964542 1432906 system_pods.go:116] waiting for k8s-apps to be running ...
I0815 23:07:39.965529 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:39.973788 1432906 system_pods.go:86] 18 kube-system pods found
I0815 23:07:39.973827 1432906 system_pods.go:89] "coredns-6f6b679f8f-rp4nh" [db0d0ea3-a6b8-4760-b57b-f62e22fc977c] Running
I0815 23:07:39.973908 1432906 system_pods.go:89] "csi-hostpath-attacher-0" [8c34ee53-079d-4d06-9f9c-58a7723f421b] Running
I0815 23:07:39.973929 1432906 system_pods.go:89] "csi-hostpath-resizer-0" [992f9408-abc8-4ec6-b7d0-bb23ec11910f] Running
I0815 23:07:39.973938 1432906 system_pods.go:89] "csi-hostpathplugin-tstkq" [17ce4001-a262-445c-97b1-c5e80a5afdbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0815 23:07:39.973945 1432906 system_pods.go:89] "etcd-addons-858013" [b7e35e09-0de7-4030-b0bf-f2d7d07b99a6] Running
I0815 23:07:39.973954 1432906 system_pods.go:89] "kindnet-pj2dh" [c087035d-81cd-4799-8507-73ad9e34e978] Running
I0815 23:07:39.973960 1432906 system_pods.go:89] "kube-apiserver-addons-858013" [9320c4b5-18b5-4df2-a666-b05af7b4b8b4] Running
I0815 23:07:39.973971 1432906 system_pods.go:89] "kube-controller-manager-addons-858013" [a47593aa-3382-470c-84af-5965ce5b8f80] Running
I0815 23:07:39.973976 1432906 system_pods.go:89] "kube-ingress-dns-minikube" [836d7509-203d-412a-92b2-e3ee78eb0c58] Running
I0815 23:07:39.973981 1432906 system_pods.go:89] "kube-proxy-4pp86" [71d9ab1a-c995-45f6-8d09-fa813657999e] Running
I0815 23:07:39.973993 1432906 system_pods.go:89] "kube-scheduler-addons-858013" [7e19d8cb-19e6-4761-85a9-db6648e5442b] Running
I0815 23:07:39.973998 1432906 system_pods.go:89] "metrics-server-8988944d9-nk895" [3644ef7d-3f82-4177-ae94-7d3040f0955f] Running
I0815 23:07:39.974002 1432906 system_pods.go:89] "nvidia-device-plugin-daemonset-89x6p" [707df689-f654-4888-8964-e9a3f923d28b] Running
I0815 23:07:39.974012 1432906 system_pods.go:89] "registry-6fb4cdfc84-754z8" [0945994d-4bbb-4a17-b58f-c430bdba570c] Running
I0815 23:07:39.974016 1432906 system_pods.go:89] "registry-proxy-vgh7t" [d1e1bad6-4bcf-47da-94ed-c325eb427c7d] Running
I0815 23:07:39.974020 1432906 system_pods.go:89] "snapshot-controller-56fcc65765-6b55s" [2ff68f50-eee7-4d17-ac1a-6c40abbf7930] Running
I0815 23:07:39.974029 1432906 system_pods.go:89] "snapshot-controller-56fcc65765-8x6gf" [23fdb9cc-93a5-49ed-a12a-b5e507d6f2bd] Running
I0815 23:07:39.974033 1432906 system_pods.go:89] "storage-provisioner" [482151cf-a7d4-44cd-9cfa-da27b6188c9b] Running
I0815 23:07:39.974040 1432906 system_pods.go:126] duration metric: took 9.49329ms to wait for k8s-apps to be running ...
I0815 23:07:39.974050 1432906 system_svc.go:44] waiting for kubelet service to be running ....
I0815 23:07:39.974110 1432906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0815 23:07:39.987828 1432906 system_svc.go:56] duration metric: took 13.757141ms WaitForService to wait for kubelet
I0815 23:07:39.987858 1432906 kubeadm.go:582] duration metric: took 52.448699352s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0815 23:07:39.987879 1432906 node_conditions.go:102] verifying NodePressure condition ...
I0815 23:07:39.991416 1432906 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0815 23:07:39.991456 1432906 node_conditions.go:123] node cpu capacity is 2
I0815 23:07:39.991468 1432906 node_conditions.go:105] duration metric: took 3.583592ms to run NodePressure ...
I0815 23:07:39.991480 1432906 start.go:241] waiting for startup goroutines ...
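The NodePressure step above reads the node's conditions and capacity; a rough kubectl equivalent (node name taken from this log, the jsonpath output format is just one convenient choice) is:
  # Print each node condition (MemoryPressure, DiskPressure, PIDPressure, Ready) with its status.
  kubectl --context addons-858013 get node addons-858013 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
  # Show the CPU and ephemeral-storage capacity values the log quotes (2 CPUs, 203034800Ki).
  kubectl --context addons-858013 get node addons-858013 -o jsonpath='{.status.capacity}'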
I0815 23:07:40.340506 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:40.465023 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:40.839911 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:40.965335 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:41.339750 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:41.465451 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:41.880077 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:41.964737 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:42.339930 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:42.464080 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:42.840867 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:42.964357 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:43.340121 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:43.464314 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:43.839615 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:43.970079 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:44.339988 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:44.466389 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:44.840098 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:44.964674 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:45.339830 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:45.463745 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:45.840593 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:45.964631 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:46.340289 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:46.465057 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:46.840714 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:46.964934 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:47.382441 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:47.482833 1432906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0815 23:07:47.840219 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:47.964595 1432906 kapi.go:107] duration metric: took 51.504712083s to wait for app.kubernetes.io/name=ingress-nginx ...
I0815 23:07:48.350529 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:48.842211 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:49.340904 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:49.840546 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:50.381769 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:50.880440 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:51.347935 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:51.881170 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:52.340158 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:52.843002 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:53.340567 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:53.880638 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:54.340083 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:54.839812 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:55.340105 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0815 23:07:55.839512 1432906 kapi.go:107] duration metric: took 57.004603319s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
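The kapi.go polling above amounts to a label-selector readiness wait; reproducing the two addon waits by hand could look like the sketch below (the 6m timeout is an arbitrary choice, the namespaces come from the pod table later in this log, and the extra component=controller label is added so completed admission-job pods do not block the wait).
  # Wait for the ingress-nginx controller pod to become Ready.
  kubectl --context addons-858013 -n ingress-nginx wait pod \
    -l app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller \
    --for=condition=Ready --timeout=6m
  # Wait for the CSI hostpath driver pods in kube-system the same way.
  kubectl --context addons-858013 -n kube-system wait pod \
    -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=6m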
I0815 23:08:23.278696 1432906 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0815 23:08:23.278720 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0815 23:08:23.777226 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0815 23:08:24.277907 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0815 23:08:24.777981 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0815 23:08:25.281763 1432906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0815 23:08:25.777792 1432906 kapi.go:107] duration metric: took 1m25.503659051s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0815 23:08:25.780046 1432906 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-858013 cluster.
I0815 23:08:25.783744 1432906 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0815 23:08:25.785694 1432906 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0815 23:08:25.787896 1432906 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, default-storageclass, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I0815 23:08:25.790685 1432906 addons.go:510] duration metric: took 1m38.251144402s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin default-storageclass volcano metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I0815 23:08:25.790732 1432906 start.go:246] waiting for cluster config update ...
I0815 23:08:25.790755 1432906 start.go:255] writing updated cluster config ...
I0815 23:08:25.791047 1432906 ssh_runner.go:195] Run: rm -f paused
I0815 23:08:26.144158 1432906 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
I0815 23:08:26.147424 1432906 out.go:177] * Done! kubectl is now configured to use "addons-858013" cluster and "default" namespace by default
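With the profile reported as done, a few ordinary follow-up checks confirm what the log claims; nothing here is specific to this run beyond the profile name.
  # Confirm which context kubectl was switched to.
  kubectl config current-context
  # Sanity-check that the addon workloads actually came up.
  kubectl --context addons-858013 get pods -A
  # minikube's own summary of enabled addons for this profile.
  minikube addons list -p addons-858013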
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
7dd551892da45 e2d3313f65753 About a minute ago Exited gadget 5 fa8a4c259cb45 gadget-drw5c
1e3c4cfd13d69 6ef582f3ec844 3 minutes ago Running gcp-auth 0 63e595a5fb43d gcp-auth-89d5ffd79-jrmcw
e6ece2391c256 ee6d597e62dc8 3 minutes ago Running csi-snapshotter 0 6301c4f7ea6bc csi-hostpathplugin-tstkq
b2ac396b34e79 642ded511e141 3 minutes ago Running csi-provisioner 0 6301c4f7ea6bc csi-hostpathplugin-tstkq
4776955cf4284 922312104da8a 3 minutes ago Running liveness-probe 0 6301c4f7ea6bc csi-hostpathplugin-tstkq
8121b9d7c16dd 08f6b2990811a 3 minutes ago Running hostpath 0 6301c4f7ea6bc csi-hostpathplugin-tstkq
0a5100359480a 0107d56dbc0be 3 minutes ago Running node-driver-registrar 0 6301c4f7ea6bc csi-hostpathplugin-tstkq
7a1feb8806928 8b46b1cd48760 3 minutes ago Running admission 0 e6ac11ae3c87b volcano-admission-77d7d48b68-xlm2l
1740af5d5c7d0 24f8f979639f1 3 minutes ago Running controller 0 e8a5c5f02ae0a ingress-nginx-controller-7559cbf597-crhhh
bd828d88fb9ee 53af6e2c4c343 4 minutes ago Running cloud-spanner-emulator 0 91690fbc8bd7d cloud-spanner-emulator-c4bc9b5f8-99727
d49c5fd997547 a9bac31a5be8d 4 minutes ago Running nvidia-device-plugin-ctr 0 b10fc449c92ad nvidia-device-plugin-daemonset-89x6p
a2f0ffcb6162a 296b5f799fcd8 4 minutes ago Exited patch 2 57b255433b161 ingress-nginx-admission-patch-nbfd8
93142972170f4 487fa743e1e22 4 minutes ago Running csi-resizer 0 829d001d169c1 csi-hostpath-resizer-0
8b373859adaf7 1461903ec4fe9 4 minutes ago Running csi-external-health-monitor-controller 0 6301c4f7ea6bc csi-hostpathplugin-tstkq
5fdaa30a3d80b d9c7ad4c226bf 4 minutes ago Running volcano-scheduler 0 60304c9692f20 volcano-scheduler-576bc46687-6nckj
c60fc314cce50 1505f556b3a7b 4 minutes ago Running volcano-controllers 0 e6b3e3ef4beba volcano-controllers-56675bb4d5-gm6cq
833b8af71d409 9a80d518f102c 4 minutes ago Running csi-attacher 0 581fed77be056 csi-hostpath-attacher-0
f2e262489a939 296b5f799fcd8 4 minutes ago Exited create 0 8686440c54dd4 ingress-nginx-admission-create-7ft94
168ce3f14fe9b 7ce2150c8929b 4 minutes ago Running local-path-provisioner 0 6d5b60cd70a2d local-path-provisioner-86d989889c-plkjf
c65d6ddb1db76 4d1e5c3e97420 4 minutes ago Running volume-snapshot-controller 0 c648a549c8f56 snapshot-controller-56fcc65765-8x6gf
0210834cbcd47 4d1e5c3e97420 4 minutes ago Running volume-snapshot-controller 0 5ce7216ff939b snapshot-controller-56fcc65765-6b55s
7469b6c7420b4 77bdba588b953 4 minutes ago Running yakd 0 56e7db5a25036 yakd-dashboard-67d98fc6b-n4hth
c8b3813f8dfa4 95dccb4df54ab 4 minutes ago Running metrics-server 0 3c5a82fc0561a metrics-server-8988944d9-nk895
7e83452ad869b 3410e1561990a 4 minutes ago Running registry-proxy 0 2b16fc917da15 registry-proxy-vgh7t
e25fbc71e26b3 6fed88f43b276 4 minutes ago Running registry 0 c2e5978716f9f registry-6fb4cdfc84-754z8
012c16ad8e8cb 2437cf7621777 4 minutes ago Running coredns 0 92ebaaf7e8538 coredns-6f6b679f8f-rp4nh
e5c6d020f7301 35508c2f890c4 4 minutes ago Running minikube-ingress-dns 0 3910a1287c73c kube-ingress-dns-minikube
98d3e00aa513e ba04bb24b9575 4 minutes ago Running storage-provisioner 0 cf6694baa7909 storage-provisioner
87b4c7a49873a 6a23fa8fd2b78 4 minutes ago Running kindnet-cni 0 d9c5b9cd47966 kindnet-pj2dh
5b82b02c1b503 71d55d66fd4ee 4 minutes ago Running kube-proxy 0 027720c3eab4b kube-proxy-4pp86
5f9b28a6c6ae7 fbbbd428abb4d 5 minutes ago Running kube-scheduler 0 c1784a5440ffd kube-scheduler-addons-858013
2b62c09982135 cd0f0ae0ec9e0 5 minutes ago Running kube-apiserver 0 7bda7aeeced5c kube-apiserver-addons-858013
5dd1dcddb8037 fcb0683e6bdbd 5 minutes ago Running kube-controller-manager 0 9a8778e778c25 kube-controller-manager-addons-858013
be959497517bb 27e3830e14027 5 minutes ago Running etcd 0 272acc49ccf6f etcd-addons-858013
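The container-status table above is reported by the container runtime rather than the API server; with the containerd runtime used here it can be regenerated by running crictl on the node (a sketch, invoked with whichever minikube binary produced this run):
  # Run a one-off command on the node and list all containers, including exited ones.
  minikube -p addons-858013 ssh "sudo crictl ps -a"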
==> containerd <==
Aug 15 23:09:04 addons-858013 containerd[814]: time="2024-08-15T23:09:04.899680583Z" level=info msg="CreateContainer within sandbox \"fa8a4c259cb45f94fd1fc6d31120dd6a863fb82837f5fc25ad50bde567045ae7\" for &ContainerMetadata{Name:gadget,Attempt:4,} returns container id \"22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba\""
Aug 15 23:09:04 addons-858013 containerd[814]: time="2024-08-15T23:09:04.900391202Z" level=info msg="StartContainer for \"22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba\""
Aug 15 23:09:04 addons-858013 containerd[814]: time="2024-08-15T23:09:04.948555590Z" level=info msg="StartContainer for \"22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba\" returns successfully"
Aug 15 23:09:06 addons-858013 containerd[814]: time="2024-08-15T23:09:06.303510183Z" level=info msg="shim disconnected" id=22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba namespace=k8s.io
Aug 15 23:09:06 addons-858013 containerd[814]: time="2024-08-15T23:09:06.303978406Z" level=warning msg="cleaning up after shim disconnected" id=22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba namespace=k8s.io
Aug 15 23:09:06 addons-858013 containerd[814]: time="2024-08-15T23:09:06.304009241Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 15 23:09:06 addons-858013 containerd[814]: time="2024-08-15T23:09:06.836265014Z" level=info msg="RemoveContainer for \"aeea1062dc26c25ded4295b26a65f0238be9b6261bb9799daedc2fa02348dc28\""
Aug 15 23:09:06 addons-858013 containerd[814]: time="2024-08-15T23:09:06.843498808Z" level=info msg="RemoveContainer for \"aeea1062dc26c25ded4295b26a65f0238be9b6261bb9799daedc2fa02348dc28\" returns successfully"
Aug 15 23:10:30 addons-858013 containerd[814]: time="2024-08-15T23:10:30.749491073Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
Aug 15 23:10:30 addons-858013 containerd[814]: time="2024-08-15T23:10:30.914815785Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 15 23:10:30 addons-858013 containerd[814]: time="2024-08-15T23:10:30.916268539Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
Aug 15 23:10:30 addons-858013 containerd[814]: time="2024-08-15T23:10:30.920201020Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 170.663942ms"
Aug 15 23:10:30 addons-858013 containerd[814]: time="2024-08-15T23:10:30.920247404Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
Aug 15 23:10:30 addons-858013 containerd[814]: time="2024-08-15T23:10:30.922263716Z" level=info msg="CreateContainer within sandbox \"fa8a4c259cb45f94fd1fc6d31120dd6a863fb82837f5fc25ad50bde567045ae7\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
Aug 15 23:10:30 addons-858013 containerd[814]: time="2024-08-15T23:10:30.945293398Z" level=info msg="CreateContainer within sandbox \"fa8a4c259cb45f94fd1fc6d31120dd6a863fb82837f5fc25ad50bde567045ae7\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0\""
Aug 15 23:10:30 addons-858013 containerd[814]: time="2024-08-15T23:10:30.946717876Z" level=info msg="StartContainer for \"7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0\""
Aug 15 23:10:31 addons-858013 containerd[814]: time="2024-08-15T23:10:31.009629689Z" level=info msg="StartContainer for \"7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0\" returns successfully"
Aug 15 23:10:32 addons-858013 containerd[814]: time="2024-08-15T23:10:32.124064558Z" level=error msg="ExecSync for \"7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0\" failed" error="failed to exec in container: failed to start exec \"90a10a88efd155049d24878526c422ca52fda81037494efc239f0955b33a40c2\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
Aug 15 23:10:32 addons-858013 containerd[814]: time="2024-08-15T23:10:32.141017438Z" level=error msg="ExecSync for \"7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0\" failed" error="failed to exec in container: failed to start exec \"1f926122cef7dccfe49f022691512b90706bc9a1f3e53282065e5b9d36b16449\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
Aug 15 23:10:32 addons-858013 containerd[814]: time="2024-08-15T23:10:32.160627908Z" level=error msg="ExecSync for \"7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0\" failed" error="failed to exec in container: failed to start exec \"c583217d566eec29f4254938d7cf2ef49178f361948e4dff41350103d7186518\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
Aug 15 23:10:32 addons-858013 containerd[814]: time="2024-08-15T23:10:32.292394420Z" level=info msg="shim disconnected" id=7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0 namespace=k8s.io
Aug 15 23:10:32 addons-858013 containerd[814]: time="2024-08-15T23:10:32.292452455Z" level=warning msg="cleaning up after shim disconnected" id=7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0 namespace=k8s.io
Aug 15 23:10:32 addons-858013 containerd[814]: time="2024-08-15T23:10:32.292464360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 15 23:10:33 addons-858013 containerd[814]: time="2024-08-15T23:10:33.087321017Z" level=info msg="RemoveContainer for \"22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba\""
Aug 15 23:10:33 addons-858013 containerd[814]: time="2024-08-15T23:10:33.094628583Z" level=info msg="RemoveContainer for \"22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba\" returns successfully"
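The containerd messages above show the gadget container being recreated (Attempt:5) and its exec probes failing to start; from the Kubernetes side the usual next step is to inspect the pod behind it, whose name and namespace appear in the tables in this log:
  # Restart count, events and last termination state for the crash-looping gadget pod.
  kubectl --context addons-858013 -n gadget describe pod gadget-drw5c
  # Logs of the previous (exited) container instance, which normally carry the real error.
  kubectl --context addons-858013 -n gadget logs gadget-drw5c --previous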
==> coredns [012c16ad8e8cb9801c7bb36ff96b6df841a939a7f8ccc0252232fb7cd79ad1f0] <==
[INFO] 10.244.0.4:58324 - 14279 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059454s
[INFO] 10.244.0.4:54269 - 13716 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00215826s
[INFO] 10.244.0.4:54269 - 13466 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002171889s
[INFO] 10.244.0.4:44154 - 20177 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000087934s
[INFO] 10.244.0.4:44154 - 58323 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000056262s
[INFO] 10.244.0.4:59150 - 50022 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088074s
[INFO] 10.244.0.4:59150 - 14201 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000035503s
[INFO] 10.244.0.4:49013 - 14752 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004832s
[INFO] 10.244.0.4:49013 - 19875 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003822s
[INFO] 10.244.0.4:53956 - 11245 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034592s
[INFO] 10.244.0.4:53956 - 64494 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000416s
[INFO] 10.244.0.4:51630 - 22455 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002017363s
[INFO] 10.244.0.4:51630 - 2985 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001951484s
[INFO] 10.244.0.4:49738 - 6668 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000066724s
[INFO] 10.244.0.4:49738 - 22030 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000045933s
[INFO] 10.244.0.24:48849 - 5553 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004304886s
[INFO] 10.244.0.24:60721 - 49033 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004598637s
[INFO] 10.244.0.24:59077 - 26154 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123988s
[INFO] 10.244.0.24:41049 - 54985 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102752s
[INFO] 10.244.0.24:34153 - 59799 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129468s
[INFO] 10.244.0.24:41989 - 1355 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107298s
[INFO] 10.244.0.24:51521 - 64079 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005684762s
[INFO] 10.244.0.24:38457 - 871 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005669206s
[INFO] 10.244.0.24:58016 - 38282 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001621532s
[INFO] 10.244.0.24:40901 - 47698 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000840054s
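The NXDOMAIN lines above are the normal side effect of the cluster DNS search path: with ndots:5 in a pod's resolv.conf, a service name is tried against each search domain before the plain name, which is exactly the query sequence CoreDNS logged ahead of the final NOERROR answers. A throwaway pod makes this easy to observe (the pod names dns-conf/dns-test and the busybox:1.36 image are arbitrary choices):
  # Show the search domains and ndots option that drive the extra lookups.
  kubectl --context addons-858013 run dns-conf --rm -it --restart=Never \
    --image=busybox:1.36 -- cat /etc/resolv.conf
  # Resolve the registry service once; a trailing dot would make the name absolute and skip the expansion.
  kubectl --context addons-858013 run dns-test --rm -it --restart=Never \
    --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local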
==> describe nodes <==
Name: addons-858013
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-858013
kubernetes.io/os=linux
minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
minikube.k8s.io/name=addons-858013
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_08_15T23_06_43_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-858013
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-858013"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 15 Aug 2024 23:06:40 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-858013
AcquireTime: <unset>
RenewTime: Thu, 15 Aug 2024 23:11:39 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 15 Aug 2024 23:08:45 +0000 Thu, 15 Aug 2024 23:06:37 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 15 Aug 2024 23:08:45 +0000 Thu, 15 Aug 2024 23:06:37 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 15 Aug 2024 23:08:45 +0000 Thu, 15 Aug 2024 23:06:37 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 15 Aug 2024 23:08:45 +0000 Thu, 15 Aug 2024 23:06:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-858013
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022364Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022364Ki
pods: 110
System Info:
Machine ID: edf3a3e63f344f23bb76b40e106793fa
System UUID: 0d83e17f-7e5f-44b0-a2ad-0e4f64cc60e2
Boot ID: b8353367-6c23-495b-9e1b-e1ab13f1b466
Kernel Version: 5.15.0-1067-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.20
Kubelet Version: v1.31.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (27 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default cloud-spanner-emulator-c4bc9b5f8-99727 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m54s
gadget gadget-drw5c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m51s
gcp-auth gcp-auth-89d5ffd79-jrmcw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m21s
ingress-nginx ingress-nginx-controller-7559cbf597-crhhh 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 4m48s
kube-system coredns-6f6b679f8f-rp4nh 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 4m56s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m46s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m46s
kube-system csi-hostpathplugin-tstkq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m46s
kube-system etcd-addons-858013 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 5m1s
kube-system kindnet-pj2dh 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 4m57s
kube-system kube-apiserver-addons-858013 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m1s
kube-system kube-controller-manager-addons-858013 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m1s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s
kube-system kube-proxy-4pp86 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m57s
kube-system kube-scheduler-addons-858013 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m2s
kube-system metrics-server-8988944d9-nk895 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 4m51s
kube-system nvidia-device-plugin-daemonset-89x6p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m54s
kube-system registry-6fb4cdfc84-754z8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s
kube-system registry-proxy-vgh7t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m52s
kube-system snapshot-controller-56fcc65765-6b55s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
kube-system snapshot-controller-56fcc65765-8x6gf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m51s
local-path-storage local-path-provisioner-86d989889c-plkjf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m50s
volcano-system volcano-admission-77d7d48b68-xlm2l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m48s
volcano-system volcano-controllers-56675bb4d5-gm6cq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m47s
volcano-system volcano-scheduler-576bc46687-6nckj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m47s
yakd-dashboard yakd-dashboard-67d98fc6b-n4hth 0 (0%) 0 (0%) 128Mi (1%) 256Mi (3%) 4m50s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1050m (52%) 100m (5%)
memory 638Mi (8%) 476Mi (6%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m54s kube-proxy
Normal Starting 5m2s kubelet Starting kubelet.
Warning CgroupV1 5m2s kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 5m2s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5m1s kubelet Node addons-858013 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m1s kubelet Node addons-858013 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m1s kubelet Node addons-858013 status is now: NodeHasSufficientPID
Normal RegisteredNode 4m58s node-controller Node addons-858013 event: Registered Node addons-858013 in Controller
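The Events block above is scoped to the node object; events for individual pods (scheduling, image pulls, probe failures) live in their own namespaces and can be listed in one pass:
  # All events across namespaces, newest last, the quickest way to spot struggling workloads.
  kubectl --context addons-858013 get events -A --sort-by=.lastTimestamp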
==> dmesg <==
==> etcd [be959497517bb5c69191b79faa9dfde465311c0b2086d5d05fb1fb7140cf20c8] <==
{"level":"info","ts":"2024-08-15T23:06:36.733665Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-08-15T23:06:36.733772Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-08-15T23:06:36.733783Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-08-15T23:06:36.734954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2024-08-15T23:06:36.735032Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-08-15T23:06:37.117103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-08-15T23:06:37.117190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-08-15T23:06:37.117206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-08-15T23:06:37.117231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-08-15T23:06:37.117449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-08-15T23:06:37.117525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-08-15T23:06:37.117605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-08-15T23:06:37.121323Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-858013 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-15T23:06:37.121634Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-15T23:06:37.121702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-15T23:06:37.121964Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-15T23:06:37.122156Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-15T23:06:37.122252Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-15T23:06:37.122903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-15T23:06:37.130122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-15T23:06:37.122939Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-15T23:06:37.124253Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-15T23:06:37.157254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-15T23:06:37.157367Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-15T23:06:37.173403Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
==> gcp-auth [1e3c4cfd13d6964ece525bee9a6c2fd893b20d4fedc84558d033935a6f8f1564] <==
2024/08/15 23:08:25 GCP Auth Webhook started!
2024/08/15 23:08:42 Ready to marshal response ...
2024/08/15 23:08:42 Ready to write response ...
2024/08/15 23:08:43 Ready to marshal response ...
2024/08/15 23:08:43 Ready to write response ...
==> kernel <==
23:11:45 up 7:54, 0 users, load average: 0.64, 0.92, 0.64
Linux addons-858013 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kindnet [87b4c7a49873a10980d1fb6037d4a54e8ad074c2f389fe9fabcf4bb65f99e254] <==
E0815 23:10:37.523226 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
I0815 23:10:43.540498 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0815 23:10:43.540546 1 main.go:299] handling current node
W0815 23:10:46.644264 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0815 23:10:46.644315 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0815 23:10:53.540756 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0815 23:10:53.540792 1 main.go:299] handling current node
W0815 23:10:53.857372 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0815 23:10:53.857411 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0815 23:11:03.540514 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0815 23:11:03.540616 1 main.go:299] handling current node
I0815 23:11:13.541167 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0815 23:11:13.541203 1 main.go:299] handling current node
I0815 23:11:23.540750 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0815 23:11:23.540793 1 main.go:299] handling current node
I0815 23:11:33.541015 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0815 23:11:33.541049 1 main.go:299] handling current node
W0815 23:11:35.086582 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0815 23:11:35.086614 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
W0815 23:11:35.179246 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0815 23:11:35.179284 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
I0815 23:11:43.540776 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0815 23:11:43.540915 1 main.go:299] handling current node
W0815 23:11:44.565666 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0815 23:11:44.565705 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
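The kindnet warnings above are RBAC denials: its service account may watch nodes but is refused pods, namespaces and networkpolicies at cluster scope. That can be confirmed with impersonation, without changing anything on the cluster:
  # Ask the apiserver whether kindnet's service account may list pods cluster-wide; prints yes or no.
  kubectl --context addons-858013 auth can-i list pods --all-namespaces \
    --as=system:serviceaccount:kube-system:kindnet
  # Locate the ClusterRoleBinding for that service account to see what it is actually granted.
  kubectl --context addons-858013 get clusterrolebinding -o wide | grep kindnet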
==> kube-apiserver [2b62c09982135ce931bff052f8f2c4fb963c5ab5cc67748fd236ba47f2adf10a] <==
W0815 23:07:40.277077 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:41.286591 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:42.253805 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.254.89:443: connect: connection refused
E0815 23:07:42.253849 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.254.89:443: connect: connection refused" logger="UnhandledError"
W0815 23:07:42.255626 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:42.349064 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:43.402694 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:44.430257 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:45.490267 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:46.495214 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:47.584463 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:48.666637 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:49.697599 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:50.715675 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:51.754531 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:52.828494 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:07:53.880472 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.180.127:443: connect: connection refused
W0815 23:08:03.080499 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.254.89:443: connect: connection refused
E0815 23:08:03.080542 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.254.89:443: connect: connection refused" logger="UnhandledError"
W0815 23:08:03.262994 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.254.89:443: connect: connection refused
E0815 23:08:03.263038 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.254.89:443: connect: connection refused" logger="UnhandledError"
W0815 23:08:23.221028 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.254.89:443: connect: connection refused
E0815 23:08:23.221069 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.254.89:443: connect: connection refused" logger="UnhandledError"
I0815 23:08:42.666435 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0815 23:08:42.703625 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
==> kube-controller-manager [5dd1dcddb8037553ed1a298e27388659738c9d0bacb710241c53127be8ae631d] <==
I0815 23:08:03.283269 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I0815 23:08:03.302218 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I0815 23:08:04.645012 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I0815 23:08:04.659656 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I0815 23:08:05.773750 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I0815 23:08:05.796031 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I0815 23:08:06.783103 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I0815 23:08:06.791224 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I0815 23:08:06.799531 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I0815 23:08:06.803667 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I0815 23:08:06.812150 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I0815 23:08:06.817039 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I0815 23:08:14.937011 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-858013"
I0815 23:08:23.240901 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="22.626491ms"
I0815 23:08:23.262938 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="21.977673ms"
I0815 23:08:23.279234 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="16.169416ms"
I0815 23:08:23.279452 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="111.622µs"
I0815 23:08:25.746499 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.481655ms"
I0815 23:08:25.747094 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="68.045µs"
I0815 23:08:36.031964 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
I0815 23:08:36.034215 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
I0815 23:08:36.080363 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
I0815 23:08:36.083752 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
I0815 23:08:42.419256 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
I0815 23:08:45.586778 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-858013"
==> kube-proxy [5b82b02c1b503a54c55a4af442d6d162034d2f2f234733934f68fa69673e535b] <==
I0815 23:06:49.818990 1 server_linux.go:66] "Using iptables proxy"
I0815 23:06:49.936895 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0815 23:06:49.936963 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0815 23:06:49.984500 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0815 23:06:49.984560 1 server_linux.go:169] "Using iptables Proxier"
I0815 23:06:49.986434 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0815 23:06:49.987101 1 server.go:483] "Version info" version="v1.31.0"
I0815 23:06:49.987124 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0815 23:06:49.998745 1 config.go:197] "Starting service config controller"
I0815 23:06:49.998773 1 shared_informer.go:313] Waiting for caches to sync for service config
I0815 23:06:49.998793 1 config.go:104] "Starting endpoint slice config controller"
I0815 23:06:49.998797 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0815 23:06:50.002863 1 config.go:326] "Starting node config controller"
I0815 23:06:50.002883 1 shared_informer.go:313] Waiting for caches to sync for node config
I0815 23:06:50.099458 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0815 23:06:50.100726 1 shared_informer.go:320] Caches are synced for service config
I0815 23:06:50.105299 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [5f9b28a6c6ae7078f2ac4349d336cccf24aa7cd567602c926744a3992cb1e122] <==
W0815 23:06:40.954608 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0815 23:06:40.954654 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.954742 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0815 23:06:40.954788 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.954960 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0815 23:06:40.955012 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.955122 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0815 23:06:40.955169 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.955252 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0815 23:06:40.955295 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.955427 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0815 23:06:40.955474 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.955565 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0815 23:06:40.955610 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.955887 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0815 23:06:40.955951 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.956113 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0815 23:06:40.956163 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.956236 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0815 23:06:40.956277 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.956317 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0815 23:06:40.956357 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0815 23:06:40.956450 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0815 23:06:40.956501 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0815 23:06:41.945862 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Aug 15 23:10:16 addons-858013 kubelet[1478]: I0815 23:10:16.747676 1478 scope.go:117] "RemoveContainer" containerID="22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba"
Aug 15 23:10:16 addons-858013 kubelet[1478]: E0815 23:10:16.748360 1478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-drw5c_gadget(a2141901-4df4-4465-a4ac-1dfc7c16ee9f)\"" pod="gadget/gadget-drw5c" podUID="a2141901-4df4-4465-a4ac-1dfc7c16ee9f"
Aug 15 23:10:22 addons-858013 kubelet[1478]: I0815 23:10:22.748382 1478 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-89x6p" secret="" err="secret \"gcp-auth\" not found"
Aug 15 23:10:30 addons-858013 kubelet[1478]: I0815 23:10:30.747314 1478 scope.go:117] "RemoveContainer" containerID="22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba"
Aug 15 23:10:32 addons-858013 kubelet[1478]: E0815 23:10:32.124453 1478 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"90a10a88efd155049d24878526c422ca52fda81037494efc239f0955b33a40c2\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0" cmd=["/bin/gadgettracermanager","-liveness"]
Aug 15 23:10:32 addons-858013 kubelet[1478]: E0815 23:10:32.141490 1478 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"1f926122cef7dccfe49f022691512b90706bc9a1f3e53282065e5b9d36b16449\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0" cmd=["/bin/gadgettracermanager","-liveness"]
Aug 15 23:10:32 addons-858013 kubelet[1478]: E0815 23:10:32.161012 1478 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"c583217d566eec29f4254938d7cf2ef49178f361948e4dff41350103d7186518\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0" cmd=["/bin/gadgettracermanager","-liveness"]
Aug 15 23:10:33 addons-858013 kubelet[1478]: I0815 23:10:33.083931 1478 scope.go:117] "RemoveContainer" containerID="22373dd9819419ab841cd43df7e38ccece4b75b1df749da6188a3658b36596ba"
Aug 15 23:10:33 addons-858013 kubelet[1478]: I0815 23:10:33.084587 1478 scope.go:117] "RemoveContainer" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0"
Aug 15 23:10:33 addons-858013 kubelet[1478]: E0815 23:10:33.084873 1478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-drw5c_gadget(a2141901-4df4-4465-a4ac-1dfc7c16ee9f)\"" pod="gadget/gadget-drw5c" podUID="a2141901-4df4-4465-a4ac-1dfc7c16ee9f"
Aug 15 23:10:34 addons-858013 kubelet[1478]: I0815 23:10:34.100848 1478 scope.go:117] "RemoveContainer" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0"
Aug 15 23:10:34 addons-858013 kubelet[1478]: E0815 23:10:34.101089 1478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-drw5c_gadget(a2141901-4df4-4465-a4ac-1dfc7c16ee9f)\"" pod="gadget/gadget-drw5c" podUID="a2141901-4df4-4465-a4ac-1dfc7c16ee9f"
Aug 15 23:10:42 addons-858013 kubelet[1478]: I0815 23:10:42.747910 1478 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-vgh7t" secret="" err="secret \"gcp-auth\" not found"
Aug 15 23:10:47 addons-858013 kubelet[1478]: I0815 23:10:47.747494 1478 scope.go:117] "RemoveContainer" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0"
Aug 15 23:10:47 addons-858013 kubelet[1478]: E0815 23:10:47.747705 1478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-drw5c_gadget(a2141901-4df4-4465-a4ac-1dfc7c16ee9f)\"" pod="gadget/gadget-drw5c" podUID="a2141901-4df4-4465-a4ac-1dfc7c16ee9f"
Aug 15 23:10:56 addons-858013 kubelet[1478]: I0815 23:10:56.747378 1478 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-754z8" secret="" err="secret \"gcp-auth\" not found"
Aug 15 23:10:58 addons-858013 kubelet[1478]: I0815 23:10:58.747995 1478 scope.go:117] "RemoveContainer" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0"
Aug 15 23:10:58 addons-858013 kubelet[1478]: E0815 23:10:58.748659 1478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-drw5c_gadget(a2141901-4df4-4465-a4ac-1dfc7c16ee9f)\"" pod="gadget/gadget-drw5c" podUID="a2141901-4df4-4465-a4ac-1dfc7c16ee9f"
Aug 15 23:11:12 addons-858013 kubelet[1478]: I0815 23:11:12.748288 1478 scope.go:117] "RemoveContainer" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0"
Aug 15 23:11:12 addons-858013 kubelet[1478]: E0815 23:11:12.748507 1478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-drw5c_gadget(a2141901-4df4-4465-a4ac-1dfc7c16ee9f)\"" pod="gadget/gadget-drw5c" podUID="a2141901-4df4-4465-a4ac-1dfc7c16ee9f"
Aug 15 23:11:24 addons-858013 kubelet[1478]: I0815 23:11:24.747193 1478 scope.go:117] "RemoveContainer" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0"
Aug 15 23:11:24 addons-858013 kubelet[1478]: E0815 23:11:24.747395 1478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-drw5c_gadget(a2141901-4df4-4465-a4ac-1dfc7c16ee9f)\"" pod="gadget/gadget-drw5c" podUID="a2141901-4df4-4465-a4ac-1dfc7c16ee9f"
Aug 15 23:11:35 addons-858013 kubelet[1478]: I0815 23:11:35.747425 1478 scope.go:117] "RemoveContainer" containerID="7dd551892da4587b1bf6ae5eeb157ddfc07d1412746045a9761de9fb930207e0"
Aug 15 23:11:35 addons-858013 kubelet[1478]: E0815 23:11:35.747702 1478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-drw5c_gadget(a2141901-4df4-4465-a4ac-1dfc7c16ee9f)\"" pod="gadget/gadget-drw5c" podUID="a2141901-4df4-4465-a4ac-1dfc7c16ee9f"
Aug 15 23:11:43 addons-858013 kubelet[1478]: I0815 23:11:43.747366 1478 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-89x6p" secret="" err="secret \"gcp-auth\" not found"
==> storage-provisioner [98d3e00aa513e73f50e0ff8cffc86b41bb669cfd7d20e3414f678a06eaa345a2] <==
I0815 23:06:54.036679 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0815 23:06:54.093161 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0815 23:06:54.093253 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0815 23:06:54.114794 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0815 23:06:54.114968 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-858013_dfd50129-e5e3-4b05-ae44-9259d6d4a40d!
I0815 23:06:54.115850 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"664f2438-92d6-4f6f-a920-1b0dfcc3a804", APIVersion:"v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-858013_dfd50129-e5e3-4b05-ae44-9259d6d4a40d became leader
I0815 23:06:54.225272 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-858013_dfd50129-e5e3-4b05-ae44-9259d6d4a40d!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-858013 -n addons-858013
helpers_test.go:261: (dbg) Run: kubectl --context addons-858013 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-7ft94 ingress-nginx-admission-patch-nbfd8 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-858013 describe pod ingress-nginx-admission-create-7ft94 ingress-nginx-admission-patch-nbfd8 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-858013 describe pod ingress-nginx-admission-create-7ft94 ingress-nginx-admission-patch-nbfd8 test-job-nginx-0: exit status 1 (89.692505ms)
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-7ft94" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-nbfd8" not found
Error from server (NotFound): pods "test-job-nginx-0" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-858013 describe pod ingress-nginx-admission-create-7ft94 ingress-nginx-admission-patch-nbfd8 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.00s)