=== RUN TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 55.493941ms
addons_test.go:835: volcano-scheduler stabilized in 55.615835ms
addons_test.go:843: volcano-admission stabilized in 56.953741ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-79dc4b78bb-2vx88" [74b4a12a-ef6c-40d9-a5f6-e73012730d8a] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "volcano-system" "app=volcano-scheduler" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:857: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:857: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
addons_test.go:857: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-23 11:48:52.760308473 +0000 UTC m=+799.270355820
addons_test.go:857: (dbg) Run: kubectl --context addons-348379 describe po volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system
addons_test.go:857: (dbg) kubectl --context addons-348379 describe po volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system:
Name: volcano-scheduler-79dc4b78bb-2vx88
Namespace: volcano-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Service Account: volcano-scheduler
Node: addons-348379/192.168.49.2
Start Time: Mon, 23 Sep 2024 11:36:49 +0000
Labels: app=volcano-scheduler
pod-template-hash=79dc4b78bb
Annotations: <none>
Status: Pending
IP: 10.244.0.19
IPs:
IP: 10.244.0.19
Controlled By: ReplicaSet/volcano-scheduler-79dc4b78bb
Containers:
volcano-scheduler:
Container ID:
Image: docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882
Image ID:
Port: <none>
Host Port: <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
-v=3
2>&1
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
DEBUG_SOCKET_DIR: /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hhr2p (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
scheduler-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: volcano-scheduler-configmap
Optional: false
klog-sock:
Type: HostPath (bare host directory volume)
Path: /tmp/klog-socks
HostPathType:
kube-api-access-hhr2p:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned volcano-system/volcano-scheduler-79dc4b78bb-2vx88 to addons-348379
Warning FailedCreatePodSandBox 12m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "10142cdcc6a633dec5ba8079e810795d3d461c87b8b526b2e68a3f0d683a7292": failed to find network info for sandbox "10142cdcc6a633dec5ba8079e810795d3d461c87b8b526b2e68a3f0d683a7292"
Normal Pulling 9m53s (x4 over 11m) kubelet Pulling image "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
Warning Failed 9m53s (x4 over 11m) kubelet Failed to pull image "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": failed to pull and unpack image "docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": failed to resolve reference "docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized
Warning Failed 9m53s (x4 over 11m) kubelet Error: ErrImagePull
Warning Failed 9m39s (x6 over 11m) kubelet Error: ImagePullBackOff
Normal BackOff 2m2s (x38 over 11m) kubelet Back-off pulling image "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
addons_test.go:857: (dbg) Run: kubectl --context addons-348379 logs volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system
addons_test.go:857: (dbg) Non-zero exit: kubectl --context addons-348379 logs volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system: exit status 1 (114.249635ms)
** stderr **
Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-79dc4b78bb-2vx88" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:857: kubectl --context addons-348379 logs volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system: exit status 1
addons_test.go:858: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
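Annotation (not part of the captured log): the kubelet events above point at the root cause. The image reference carries a doubled registry prefix, `docker.io/docker.io/volcanosh/vc-scheduler`, so the pull resolves to the nonexistent Docker Hub repository `docker.io/volcanosh/vc-scheduler` and the registry answers the manifest HEAD request with 401 Unauthorized. A minimal sketch of normalizing such a reference (the `bad`/`fixed` variable names are illustrative; the digest suffix from the log is omitted here and would be carried through unchanged):

```shell
# Malformed reference as recorded in the pod events
bad='docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0'

# Strip the duplicated registry prefix, keeping the rest of the reference intact
fixed=$(printf '%s\n' "$bad" | sed 's#^docker.io/docker.io/#docker.io/#')

echo "$fixed"   # docker.io/volcanosh/vc-scheduler:v1.10.0
```

Under that assumption, the fix belongs wherever the addon manifest composes the image name (registry prepended to an already-qualified reference), not in the cluster itself.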
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-348379
helpers_test.go:235: (dbg) docker inspect addons-348379:
-- stdout --
[
{
"Id": "1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751",
"Created": "2024-09-23T11:36:14.086186806Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2904413,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-23T11:36:14.226479889Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
"ResolvConfPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hostname",
"HostsPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hosts",
"LogPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751-json.log",
"Name": "/addons-348379",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-348379:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-348379",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3-init/diff:/var/lib/docker/overlay2/e2b16ea68ee0680d6b3555ff1ad64b95e5f88f6159373a302ec4d54fa432d99a/diff",
"MergedDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/merged",
"UpperDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/diff",
"WorkDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-348379",
"Source": "/var/lib/docker/volumes/addons-348379/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-348379",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-348379",
"name.minikube.sigs.k8s.io": "addons-348379",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "0be556c33c7ef40a6e70f0f396b5b7933e0b3ad3ca535a0d80fe041626578e74",
"SandboxKey": "/var/run/docker/netns/0be556c33c7e",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "41792"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "41793"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "41796"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "41794"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "41795"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-348379": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "d5b669f6f456e67aee85da3145f607fecadf2af36a162932dd5e9bc9ffffee31",
"EndpointID": "406dde2b8517c79d36b643a1ab3c5c13c7554eb0ba705342617104596090b341",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-348379",
"1973e07b6a14"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-348379 -n addons-348379
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-348379 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-348379 logs -n 25: (1.528062685s)
helpers_test.go:252: TestAddons/serial/Volcano logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-611017 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | |
| | -p download-only-611017 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
| delete | -p download-only-611017 | download-only-611017 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
| start | -o=json --download-only | download-only-423730 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | |
| | -p download-only-423730 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
| delete | -p download-only-423730 | download-only-423730 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
| delete | -p download-only-611017 | download-only-611017 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
| delete | -p download-only-423730 | download-only-423730 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
| start | --download-only -p | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | |
| | download-docker-021793 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p download-docker-021793 | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
| start | --download-only -p | binary-mirror-046209 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | |
| | binary-mirror-046209 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:34157 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p binary-mirror-046209 | binary-mirror-046209 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
| addons | enable dashboard -p | addons-348379 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | |
| | addons-348379 | | | | | |
| addons | disable dashboard -p | addons-348379 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | |
| | addons-348379 | | | | | |
| start | -p addons-348379 --wait=true | addons-348379 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:42 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/23 11:35:49
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.23.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 11:35:49.628900 2903914 out.go:345] Setting OutFile to fd 1 ...
I0923 11:35:49.629020 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:35:49.629030 2903914 out.go:358] Setting ErrFile to fd 2...
I0923 11:35:49.629036 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:35:49.629290 2903914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
I0923 11:35:49.629765 2903914 out.go:352] Setting JSON to false
I0923 11:35:49.630698 2903914 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":155897,"bootTime":1726935453,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0923 11:35:49.630769 2903914 start.go:139] virtualization:
I0923 11:35:49.632856 2903914 out.go:177] * [addons-348379] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0923 11:35:49.634664 2903914 out.go:177] - MINIKUBE_LOCATION=19688
I0923 11:35:49.634735 2903914 notify.go:220] Checking for updates...
I0923 11:35:49.637703 2903914 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 11:35:49.639443 2903914 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
I0923 11:35:49.640961 2903914 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
I0923 11:35:49.642654 2903914 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0923 11:35:49.644154 2903914 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0923 11:35:49.646005 2903914 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 11:35:49.674816 2903914 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I0923 11:35:49.674959 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 11:35:49.740942 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.731429543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 11:35:49.741058 2903914 docker.go:318] overlay module found
I0923 11:35:49.742922 2903914 out.go:177] * Using the docker driver based on user configuration
I0923 11:35:49.744404 2903914 start.go:297] selected driver: docker
I0923 11:35:49.744427 2903914 start.go:901] validating driver "docker" against <nil>
I0923 11:35:49.744443 2903914 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 11:35:49.745066 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 11:35:49.807015 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.798069613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 11:35:49.807243 2903914 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0923 11:35:49.807490 2903914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 11:35:49.810585 2903914 out.go:177] * Using Docker driver with root privileges
I0923 11:35:49.812100 2903914 cni.go:84] Creating CNI manager for ""
I0923 11:35:49.812180 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0923 11:35:49.812195 2903914 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0923 11:35:49.812295 2903914 start.go:340] cluster config:
{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 11:35:49.814088 2903914 out.go:177] * Starting "addons-348379" primary control-plane node in "addons-348379" cluster
I0923 11:35:49.815629 2903914 cache.go:121] Beginning downloading kic base image for docker with containerd
I0923 11:35:49.817311 2903914 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
I0923 11:35:49.818975 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0923 11:35:49.819031 2903914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
I0923 11:35:49.819044 2903914 cache.go:56] Caching tarball of preloaded images
I0923 11:35:49.819072 2903914 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
I0923 11:35:49.819129 2903914 preload.go:172] Found /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0923 11:35:49.819140 2903914 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
I0923 11:35:49.819629 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
I0923 11:35:49.819663 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json: {Name:mk57bf6c9d1a024b95a9182333fb0e843fbdc049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:35:49.834226 2903914 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
I0923 11:35:49.834349 2903914 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
I0923 11:35:49.834370 2903914 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
I0923 11:35:49.834376 2903914 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
I0923 11:35:49.834383 2903914 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
I0923 11:35:49.834388 2903914 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
I0923 11:36:07.477009 2903914 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
I0923 11:36:07.477055 2903914 cache.go:194] Successfully downloaded all kic artifacts
I0923 11:36:07.477102 2903914 start.go:360] acquireMachinesLock for addons-348379: {Name:mk0afc734c4276635047574670b52ff1624a597d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 11:36:07.477241 2903914 start.go:364] duration metric: took 114.625µs to acquireMachinesLock for "addons-348379"
I0923 11:36:07.477273 2903914 start.go:93] Provisioning new machine with config: &{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0923 11:36:07.477361 2903914 start.go:125] createHost starting for "" (driver="docker")
I0923 11:36:07.479499 2903914 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0923 11:36:07.479768 2903914 start.go:159] libmachine.API.Create for "addons-348379" (driver="docker")
I0923 11:36:07.479806 2903914 client.go:168] LocalClient.Create starting
I0923 11:36:07.479934 2903914 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem
I0923 11:36:07.656758 2903914 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem
I0923 11:36:07.895005 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 11:36:07.911037 2903914 cli_runner.go:211] docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 11:36:07.911143 2903914 network_create.go:284] running [docker network inspect addons-348379] to gather additional debugging logs...
I0923 11:36:07.911165 2903914 cli_runner.go:164] Run: docker network inspect addons-348379
W0923 11:36:07.926743 2903914 cli_runner.go:211] docker network inspect addons-348379 returned with exit code 1
I0923 11:36:07.926792 2903914 network_create.go:287] error running [docker network inspect addons-348379]: docker network inspect addons-348379: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-348379 not found
I0923 11:36:07.926806 2903914 network_create.go:289] output of [docker network inspect addons-348379]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-348379 not found
** /stderr **
I0923 11:36:07.926904 2903914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 11:36:07.941390 2903914 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3eb90}
I0923 11:36:07.941437 2903914 network_create.go:124] attempt to create docker network addons-348379 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0923 11:36:07.941499 2903914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-348379 addons-348379
I0923 11:36:08.013253 2903914 network_create.go:108] docker network addons-348379 192.168.49.0/24 created
I0923 11:36:08.013292 2903914 kic.go:121] calculated static IP "192.168.49.2" for the "addons-348379" container
I0923 11:36:08.013374 2903914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0923 11:36:08.030080 2903914 cli_runner.go:164] Run: docker volume create addons-348379 --label name.minikube.sigs.k8s.io=addons-348379 --label created_by.minikube.sigs.k8s.io=true
I0923 11:36:08.048023 2903914 oci.go:103] Successfully created a docker volume addons-348379
I0923 11:36:08.048128 2903914 cli_runner.go:164] Run: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
I0923 11:36:10.056258 2903914 cli_runner.go:217] Completed: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.008084592s)
I0923 11:36:10.056295 2903914 oci.go:107] Successfully prepared a docker volume addons-348379
I0923 11:36:10.056323 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0923 11:36:10.056345 2903914 kic.go:194] Starting extracting preloaded images to volume ...
I0923 11:36:10.056440 2903914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
I0923 11:36:14.019670 2903914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.963182399s)
I0923 11:36:14.019706 2903914 kic.go:203] duration metric: took 3.963357873s to extract preloaded images to volume ...
W0923 11:36:14.019879 2903914 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0923 11:36:14.020008 2903914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0923 11:36:14.071499 2903914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-348379 --name addons-348379 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-348379 --network addons-348379 --ip 192.168.49.2 --volume addons-348379:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
I0923 11:36:14.406562 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Running}}
I0923 11:36:14.430276 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:14.456577 2903914 cli_runner.go:164] Run: docker exec addons-348379 stat /var/lib/dpkg/alternatives/iptables
I0923 11:36:14.524343 2903914 oci.go:144] the created container "addons-348379" has a running status.
I0923 11:36:14.524373 2903914 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa...
I0923 11:36:14.817157 2903914 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0923 11:36:14.839902 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:14.872963 2903914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0923 11:36:14.872990 2903914 kic_runner.go:114] Args: [docker exec --privileged addons-348379 chown docker:docker /home/docker/.ssh/authorized_keys]
I0923 11:36:14.951457 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:14.976890 2903914 machine.go:93] provisionDockerMachine start ...
I0923 11:36:14.977005 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:15.007610 2903914 main.go:141] libmachine: Using SSH client type: native
I0923 11:36:15.007901 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 41792 <nil> <nil>}
I0923 11:36:15.007913 2903914 main.go:141] libmachine: About to run SSH command:
hostname
I0923 11:36:15.203560 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
I0923 11:36:15.203583 2903914 ubuntu.go:169] provisioning hostname "addons-348379"
I0923 11:36:15.203659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:15.229660 2903914 main.go:141] libmachine: Using SSH client type: native
I0923 11:36:15.229941 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 41792 <nil> <nil>}
I0923 11:36:15.229961 2903914 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-348379 && echo "addons-348379" | sudo tee /etc/hostname
I0923 11:36:15.387302 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
I0923 11:36:15.387387 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:15.409587 2903914 main.go:141] libmachine: Using SSH client type: native
I0923 11:36:15.409829 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 41792 <nil> <nil>}
I0923 11:36:15.409846 2903914 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-348379' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-348379/g' /etc/hosts;
else
echo '127.0.1.1 addons-348379' | sudo tee -a /etc/hosts;
fi
fi
I0923 11:36:15.552128 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0923 11:36:15.552218 2903914 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19688-2897765/.minikube CaCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19688-2897765/.minikube}
I0923 11:36:15.552276 2903914 ubuntu.go:177] setting up certificates
I0923 11:36:15.552305 2903914 provision.go:84] configureAuth start
I0923 11:36:15.552432 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
I0923 11:36:15.571035 2903914 provision.go:143] copyHostCerts
I0923 11:36:15.571118 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem (1078 bytes)
I0923 11:36:15.571374 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem (1123 bytes)
I0923 11:36:15.571463 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem (1675 bytes)
I0923 11:36:15.571520 2903914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem org=jenkins.addons-348379 san=[127.0.0.1 192.168.49.2 addons-348379 localhost minikube]
I0923 11:36:15.936111 2903914 provision.go:177] copyRemoteCerts
I0923 11:36:15.936188 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0923 11:36:15.936230 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:15.954080 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:16.048521 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0923 11:36:16.073032 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0923 11:36:16.096585 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0923 11:36:16.120809 2903914 provision.go:87] duration metric: took 568.476502ms to configureAuth
I0923 11:36:16.120878 2903914 ubuntu.go:193] setting minikube options for container-runtime
I0923 11:36:16.121066 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 11:36:16.121075 2903914 machine.go:96] duration metric: took 1.144158288s to provisionDockerMachine
I0923 11:36:16.121082 2903914 client.go:171] duration metric: took 8.641266117s to LocalClient.Create
I0923 11:36:16.121105 2903914 start.go:167] duration metric: took 8.641338888s to libmachine.API.Create "addons-348379"
I0923 11:36:16.121117 2903914 start.go:293] postStartSetup for "addons-348379" (driver="docker")
I0923 11:36:16.121127 2903914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0923 11:36:16.121180 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0923 11:36:16.121219 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:16.140164 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:16.237274 2903914 ssh_runner.go:195] Run: cat /etc/os-release
I0923 11:36:16.240688 2903914 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0923 11:36:16.240726 2903914 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0923 11:36:16.240751 2903914 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0923 11:36:16.240759 2903914 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0923 11:36:16.240772 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/addons for local assets ...
I0923 11:36:16.240845 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/files for local assets ...
I0923 11:36:16.240872 2903914 start.go:296] duration metric: took 119.748923ms for postStartSetup
I0923 11:36:16.241197 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
I0923 11:36:16.257321 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
I0923 11:36:16.257608 2903914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0923 11:36:16.257659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:16.273475 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:16.364420 2903914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0923 11:36:16.369417 2903914 start.go:128] duration metric: took 8.892040374s to createHost
I0923 11:36:16.369444 2903914 start.go:83] releasing machines lock for "addons-348379", held for 8.892189913s
I0923 11:36:16.369525 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
I0923 11:36:16.386496 2903914 ssh_runner.go:195] Run: cat /version.json
I0923 11:36:16.386558 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:16.386844 2903914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0923 11:36:16.386924 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:16.402767 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:16.407504 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:16.620856 2903914 ssh_runner.go:195] Run: systemctl --version
I0923 11:36:16.625351 2903914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0923 11:36:16.629494 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0923 11:36:16.656196 2903914 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0923 11:36:16.656273 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0923 11:36:16.685634 2903914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0923 11:36:16.685658 2903914 start.go:495] detecting cgroup driver to use...
I0923 11:36:16.685694 2903914 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 11:36:16.685752 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0923 11:36:16.698438 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0923 11:36:16.709958 2903914 docker.go:217] disabling cri-docker service (if available) ...
I0923 11:36:16.710048 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0923 11:36:16.723912 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0923 11:36:16.738695 2903914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0923 11:36:16.833978 2903914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0923 11:36:16.926860 2903914 docker.go:233] disabling docker service ...
I0923 11:36:16.926964 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0923 11:36:16.947106 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0923 11:36:16.959548 2903914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0923 11:36:17.053558 2903914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0923 11:36:17.135034 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0923 11:36:17.146613 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 11:36:17.163902 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0923 11:36:17.174292 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0923 11:36:17.185041 2903914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0923 11:36:17.185112 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0923 11:36:17.195986 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 11:36:17.206180 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0923 11:36:17.217391 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 11:36:17.228075 2903914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0923 11:36:17.237476 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0923 11:36:17.247362 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0923 11:36:17.257646 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0923 11:36:17.267821 2903914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0923 11:36:17.276866 2903914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0923 11:36:17.286512 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 11:36:17.359977 2903914 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0923 11:36:17.486930 2903914 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0923 11:36:17.487093 2903914 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0923 11:36:17.490604 2903914 start.go:563] Will wait 60s for crictl version
I0923 11:36:17.490709 2903914 ssh_runner.go:195] Run: which crictl
I0923 11:36:17.494017 2903914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0923 11:36:17.529914 2903914 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I0923 11:36:17.530001 2903914 ssh_runner.go:195] Run: containerd --version
I0923 11:36:17.553062 2903914 ssh_runner.go:195] Run: containerd --version
I0923 11:36:17.581066 2903914 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
I0923 11:36:17.583092 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 11:36:17.598970 2903914 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0923 11:36:17.602709 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0923 11:36:17.616517 2903914 kubeadm.go:883] updating cluster {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0923 11:36:17.616637 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0923 11:36:17.616705 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 11:36:17.653561 2903914 containerd.go:627] all images are preloaded for containerd runtime.
I0923 11:36:17.653588 2903914 containerd.go:534] Images already preloaded, skipping extraction
I0923 11:36:17.653654 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 11:36:17.689043 2903914 containerd.go:627] all images are preloaded for containerd runtime.
I0923 11:36:17.689069 2903914 cache_images.go:84] Images are preloaded, skipping loading
I0923 11:36:17.689077 2903914 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
I0923 11:36:17.689170 2903914 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-348379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0923 11:36:17.689240 2903914 ssh_runner.go:195] Run: sudo crictl info
I0923 11:36:17.725180 2903914 cni.go:84] Creating CNI manager for ""
I0923 11:36:17.725207 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0923 11:36:17.725219 2903914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0923 11:36:17.725244 2903914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-348379 NodeName:addons-348379 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0923 11:36:17.725401 2903914 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "addons-348379"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0923 11:36:17.725481 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0923 11:36:17.734617 2903914 binaries.go:44] Found k8s binaries, skipping transfer
I0923 11:36:17.734696 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0923 11:36:17.743298 2903914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
I0923 11:36:17.761798 2903914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0923 11:36:17.779190 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
I0923 11:36:17.797282 2903914 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0923 11:36:17.801431 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0923 11:36:17.813470 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 11:36:17.904297 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0923 11:36:17.918201 2903914 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379 for IP: 192.168.49.2
I0923 11:36:17.918278 2903914 certs.go:194] generating shared ca certs ...
I0923 11:36:17.918311 2903914 certs.go:226] acquiring lock for ca certs: {Name:mk3307686e47e832a4d12b60b03ff3c8ff918f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:17.918478 2903914 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key
I0923 11:36:18.402482 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt ...
I0923 11:36:18.402521 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt: {Name:mka24ad8ce2563bd38493ad3048e3b202e9928cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:18.403346 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key ...
I0923 11:36:18.403367 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key: {Name:mke301cd867e18ebea9d875f8c02fb489d6a0a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:18.404035 2903914 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key
I0923 11:36:18.591619 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt ...
I0923 11:36:18.591652 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt: {Name:mkf29e9cf8d545d0d33d0ce8b9548c24a316f1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:18.591849 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key ...
I0923 11:36:18.591862 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key: {Name:mk38a5ba119b442a98d5a1991cd20b7dc11fb378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:18.592490 2903914 certs.go:256] generating profile certs ...
I0923 11:36:18.592562 2903914 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key
I0923 11:36:18.592583 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt with IP's: []
I0923 11:36:18.906095 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt ...
I0923 11:36:18.906134 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: {Name:mk33102a3556c59cf025437aacb3628bfa41ed3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:18.906340 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key ...
I0923 11:36:18.906354 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key: {Name:mk1894a7120f896161f07a459fec6eb4fe11e236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:18.906997 2903914 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb
I0923 11:36:18.907023 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0923 11:36:19.176533 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb ...
I0923 11:36:19.176571 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb: {Name:mkf3902710f18b86666bddc46eb9d246a2fd9230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:19.177433 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb ...
I0923 11:36:19.177458 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb: {Name:mkcc729dc3699800a37a33c607924c19bb2a2d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:19.177602 2903914 certs.go:381] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt
I0923 11:36:19.177687 2903914 certs.go:385] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key
I0923 11:36:19.177743 2903914 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key
I0923 11:36:19.177760 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt with IP's: []
I0923 11:36:19.407564 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt ...
I0923 11:36:19.407592 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt: {Name:mk60cabd91332996a9c3d4f42fab2e735667c2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:19.408332 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key ...
I0923 11:36:19.408353 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key: {Name:mk59bc62a8c0559971fc4c2dcb7a472d97d949c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:19.408565 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem (1679 bytes)
I0923 11:36:19.408610 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem (1078 bytes)
I0923 11:36:19.408642 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem (1123 bytes)
I0923 11:36:19.408675 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem (1675 bytes)
I0923 11:36:19.409266 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0923 11:36:19.438812 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0923 11:36:19.463399 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0923 11:36:19.487765 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0923 11:36:19.512554 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0923 11:36:19.537552 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0923 11:36:19.562531 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0923 11:36:19.587389 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0923 11:36:19.612515 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0923 11:36:19.638755 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0923 11:36:19.657011 2903914 ssh_runner.go:195] Run: openssl version
I0923 11:36:19.662616 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0923 11:36:19.672363 2903914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0923 11:36:19.675882 2903914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:36 /usr/share/ca-certificates/minikubeCA.pem
I0923 11:36:19.675950 2903914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0923 11:36:19.682948 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
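The two openssl/ln steps above implement OpenSSL's hashed-certificate-directory convention: the symlink name (here `b5213941.0`) is the subject hash printed by `openssl x509 -hash`, plus a `.0` suffix. A minimal self-contained sketch, using a throwaway self-signed CA in a temp directory as a stand-in for `minikubeCA.pem` (all paths here are hypothetical):

```shell
# Create a scratch directory and a throwaway CA standing in for minikubeCA.pem.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/minikubeCA.pem" 2>/dev/null
# The subject hash is what names the symlink in /etc/ssl/certs.
hash=$(openssl x509 -hash -noout -in "$tmpdir/minikubeCA.pem")
# Mirror the log's ln -fs step: link the cert under "<hash>.0".
ln -fs "$tmpdir/minikubeCA.pem" "$tmpdir/$hash.0"
ls "$tmpdir"
```

OpenSSL resolves trust lookups in such a directory by computing the same hash, which is why the symlink must carry exactly this name.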
I0923 11:36:19.692316 2903914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0923 11:36:19.695950 2903914 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0923 11:36:19.695999 2903914 kubeadm.go:392] StartCluster: {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 11:36:19.696080 2903914 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0923 11:36:19.696143 2903914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0923 11:36:19.737759 2903914 cri.go:89] found id: ""
I0923 11:36:19.737855 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0923 11:36:19.746943 2903914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0923 11:36:19.756075 2903914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0923 11:36:19.756179 2903914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0923 11:36:19.767199 2903914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0923 11:36:19.767221 2903914 kubeadm.go:157] found existing configuration files:
I0923 11:36:19.767362 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0923 11:36:19.776658 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0923 11:36:19.776748 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0923 11:36:19.785348 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0923 11:36:19.794057 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0923 11:36:19.794157 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0923 11:36:19.803152 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0923 11:36:19.812476 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0923 11:36:19.812574 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0923 11:36:19.821847 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0923 11:36:19.830557 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0923 11:36:19.830648 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
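The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so `kubeadm init` can regenerate it. A hypothetical re-implementation of that logic as a function, demonstrated against a temp directory rather than /etc/kubernetes (the function name and demo paths are assumptions, not minikube code):

```shell
# Keep a kubeconfig in $dir only if it references $endpoint; drop it otherwise.
clean_stale_configs() {
  dir=$1 endpoint=$2
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    grep -q "$endpoint" "$dir/$f" 2>/dev/null || rm -f "$dir/$f"
  done
}

# Demo: one config that matches the endpoint, one stale config that does not.
demo=$(mktemp -d)
echo "server: https://control-plane.minikube.internal:8443" > "$demo/admin.conf"
touch "$demo/kubelet.conf"
clean_stale_configs "$demo" "https://control-plane.minikube.internal:8443"
ls "$demo"
```

In the log above every grep exits with status 2 because none of the files exist yet (first start), so all four `rm -f` calls are no-ops.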
I0923 11:36:19.839343 2903914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0923 11:36:19.882483 2903914 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0923 11:36:19.882775 2903914 kubeadm.go:310] [preflight] Running pre-flight checks
I0923 11:36:19.901384 2903914 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0923 11:36:19.901507 2903914 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
I0923 11:36:19.901565 2903914 kubeadm.go:310] OS: Linux
I0923 11:36:19.901637 2903914 kubeadm.go:310] CGROUPS_CPU: enabled
I0923 11:36:19.901714 2903914 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0923 11:36:19.901776 2903914 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0923 11:36:19.901867 2903914 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0923 11:36:19.901970 2903914 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0923 11:36:19.902052 2903914 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0923 11:36:19.902117 2903914 kubeadm.go:310] CGROUPS_PIDS: enabled
I0923 11:36:19.902212 2903914 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0923 11:36:19.902294 2903914 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0923 11:36:19.967219 2903914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0923 11:36:19.967380 2903914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0923 11:36:19.967473 2903914 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0923 11:36:19.973209 2903914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0923 11:36:19.975734 2903914 out.go:235] - Generating certificates and keys ...
I0923 11:36:19.975831 2903914 kubeadm.go:310] [certs] Using existing ca certificate authority
I0923 11:36:19.975904 2903914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0923 11:36:20.179743 2903914 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0923 11:36:21.047813 2903914 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0923 11:36:21.298950 2903914 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0923 11:36:22.135071 2903914 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0923 11:36:22.608576 2903914 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0923 11:36:22.609029 2903914 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0923 11:36:22.891026 2903914 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0923 11:36:22.891409 2903914 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0923 11:36:23.205606 2903914 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0923 11:36:23.489426 2903914 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0923 11:36:23.714238 2903914 kubeadm.go:310] [certs] Generating "sa" key and public key
I0923 11:36:23.714637 2903914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0923 11:36:23.917484 2903914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0923 11:36:24.438330 2903914 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0923 11:36:24.759712 2903914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0923 11:36:25.033943 2903914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0923 11:36:25.695483 2903914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0923 11:36:25.696139 2903914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0923 11:36:25.699152 2903914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0923 11:36:25.701227 2903914 out.go:235] - Booting up control plane ...
I0923 11:36:25.701329 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0923 11:36:25.702980 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0923 11:36:25.704104 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0923 11:36:25.714874 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0923 11:36:25.721041 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0923 11:36:25.721278 2903914 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0923 11:36:25.822291 2903914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0923 11:36:25.822414 2903914 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0923 11:36:26.325513 2903914 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.540451ms
I0923 11:36:26.325611 2903914 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0923 11:36:32.327784 2903914 kubeadm.go:310] [api-check] The API server is healthy after 6.002254671s
I0923 11:36:32.349741 2903914 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0923 11:36:32.363473 2903914 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0923 11:36:32.390246 2903914 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0923 11:36:32.390456 2903914 kubeadm.go:310] [mark-control-plane] Marking the node addons-348379 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0923 11:36:32.404726 2903914 kubeadm.go:310] [bootstrap-token] Using token: 9jvvlf.nkkd2cu2r67rq0id
I0923 11:36:32.407710 2903914 out.go:235] - Configuring RBAC rules ...
I0923 11:36:32.407927 2903914 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0923 11:36:32.415924 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0923 11:36:32.424675 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0923 11:36:32.430654 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0923 11:36:32.435150 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0923 11:36:32.439236 2903914 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0923 11:36:32.750475 2903914 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0923 11:36:33.169716 2903914 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0923 11:36:33.734975 2903914 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0923 11:36:33.736330 2903914 kubeadm.go:310]
I0923 11:36:33.736408 2903914 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0923 11:36:33.736419 2903914 kubeadm.go:310]
I0923 11:36:33.736495 2903914 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0923 11:36:33.736505 2903914 kubeadm.go:310]
I0923 11:36:33.736531 2903914 kubeadm.go:310] mkdir -p $HOME/.kube
I0923 11:36:33.736593 2903914 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0923 11:36:33.736647 2903914 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0923 11:36:33.736656 2903914 kubeadm.go:310]
I0923 11:36:33.736710 2903914 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0923 11:36:33.736719 2903914 kubeadm.go:310]
I0923 11:36:33.736766 2903914 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0923 11:36:33.736774 2903914 kubeadm.go:310]
I0923 11:36:33.736827 2903914 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0923 11:36:33.736907 2903914 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0923 11:36:33.736980 2903914 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0923 11:36:33.736989 2903914 kubeadm.go:310]
I0923 11:36:33.737074 2903914 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0923 11:36:33.737166 2903914 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0923 11:36:33.737175 2903914 kubeadm.go:310]
I0923 11:36:33.737258 2903914 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
I0923 11:36:33.737363 2903914 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 \
I0923 11:36:33.737389 2903914 kubeadm.go:310] --control-plane
I0923 11:36:33.737397 2903914 kubeadm.go:310]
I0923 11:36:33.737482 2903914 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0923 11:36:33.737497 2903914 kubeadm.go:310]
I0923 11:36:33.737577 2903914 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
I0923 11:36:33.737677 2903914 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37
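The `--discovery-token-ca-cert-hash` in the join commands above is a SHA-256 digest of the cluster CA's DER-encoded public key; kubeadm's documentation shows how to recompute it with openssl. A sketch against a throwaway CA in a temp directory (on the node, the real input would be the cluster CA certificate; the temp-dir paths here are assumptions):

```shell
# Throwaway CA standing in for the cluster CA certificate.
cadir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$cadir/ca.key" -out "$cadir/ca.crt" 2>/dev/null
# Extract the public key, re-encode as DER, and take its SHA-256 digest --
# the same value a node would pass to kubeadm join.
ca_hash=$(openssl x509 -pubkey -in "$cadir/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$ca_hash"
```

A joining node recomputes this hash from the CA it fetches via the bootstrap token and refuses to proceed on a mismatch, which is what makes the token-based discovery safe against a spoofed API server.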
I0923 11:36:33.741014 2903914 kubeadm.go:310] W0923 11:36:19.879102 1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 11:36:33.741352 2903914 kubeadm.go:310] W0923 11:36:19.880019 1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 11:36:33.741583 2903914 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
I0923 11:36:33.741697 2903914 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0923 11:36:33.741739 2903914 cni.go:84] Creating CNI manager for ""
I0923 11:36:33.741754 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0923 11:36:33.744827 2903914 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0923 11:36:33.747515 2903914 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0923 11:36:33.751591 2903914 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
I0923 11:36:33.751612 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I0923 11:36:33.770489 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0923 11:36:34.059421 2903914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0923 11:36:34.059507 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:34.059553 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-348379 minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=addons-348379 minikube.k8s.io/primary=true
I0923 11:36:34.223427 2903914 ops.go:34] apiserver oom_adj: -16
I0923 11:36:34.223613 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:34.724103 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:35.223668 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:35.724171 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:36.224392 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:36.724281 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:37.223628 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:37.724422 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:38.224601 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 11:36:38.335390 2903914 kubeadm.go:1113] duration metric: took 4.275959417s to wait for elevateKubeSystemPrivileges
I0923 11:36:38.335426 2903914 kubeadm.go:394] duration metric: took 18.639429725s to StartCluster
I0923 11:36:38.335446 2903914 settings.go:142] acquiring lock: {Name:mk4415211fc0f47c243959f36c7d2f9eeca37653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:38.336106 2903914 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19688-2897765/kubeconfig
I0923 11:36:38.336533 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 11:36:38.336743 2903914 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0923 11:36:38.336888 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0923 11:36:38.337142 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 11:36:38.337173 2903914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0923 11:36:38.337257 2903914 addons.go:69] Setting yakd=true in profile "addons-348379"
I0923 11:36:38.337273 2903914 addons.go:234] Setting addon yakd=true in "addons-348379"
I0923 11:36:38.337299 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.337814 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.338086 2903914 addons.go:69] Setting inspektor-gadget=true in profile "addons-348379"
I0923 11:36:38.338109 2903914 addons.go:234] Setting addon inspektor-gadget=true in "addons-348379"
I0923 11:36:38.338133 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.338208 2903914 addons.go:69] Setting metrics-server=true in profile "addons-348379"
I0923 11:36:38.338232 2903914 addons.go:234] Setting addon metrics-server=true in "addons-348379"
I0923 11:36:38.338262 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.338580 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.338757 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.339010 2903914 addons.go:69] Setting cloud-spanner=true in profile "addons-348379"
I0923 11:36:38.339029 2903914 addons.go:234] Setting addon cloud-spanner=true in "addons-348379"
I0923 11:36:38.339055 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.339531 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.343469 2903914 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-348379"
I0923 11:36:38.343503 2903914 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-348379"
I0923 11:36:38.345984 2903914 out.go:177] * Verifying Kubernetes components...
I0923 11:36:38.346027 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.346500 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.348537 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 11:36:38.345645 2903914 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-348379"
I0923 11:36:38.365526 2903914 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-348379"
I0923 11:36:38.365567 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.366036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.345659 2903914 addons.go:69] Setting default-storageclass=true in profile "addons-348379"
I0923 11:36:38.373074 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-348379"
I0923 11:36:38.373503 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.345667 2903914 addons.go:69] Setting gcp-auth=true in profile "addons-348379"
I0923 11:36:38.384131 2903914 mustload.go:65] Loading cluster: addons-348379
I0923 11:36:38.384331 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 11:36:38.384582 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.345675 2903914 addons.go:69] Setting ingress=true in profile "addons-348379"
I0923 11:36:38.394024 2903914 addons.go:234] Setting addon ingress=true in "addons-348379"
I0923 11:36:38.394117 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.345679 2903914 addons.go:69] Setting ingress-dns=true in profile "addons-348379"
I0923 11:36:38.401504 2903914 addons.go:234] Setting addon ingress-dns=true in "addons-348379"
I0923 11:36:38.404456 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.405051 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.345945 2903914 addons.go:69] Setting registry=true in profile "addons-348379"
I0923 11:36:38.410134 2903914 addons.go:234] Setting addon registry=true in "addons-348379"
I0923 11:36:38.410210 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.411036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.345953 2903914 addons.go:69] Setting storage-provisioner=true in profile "addons-348379"
I0923 11:36:38.345956 2903914 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-348379"
I0923 11:36:38.345960 2903914 addons.go:69] Setting volcano=true in profile "addons-348379"
I0923 11:36:38.345964 2903914 addons.go:69] Setting volumesnapshots=true in profile "addons-348379"
I0923 11:36:38.411256 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.454084 2903914 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0923 11:36:38.460897 2903914 addons.go:234] Setting addon storage-provisioner=true in "addons-348379"
I0923 11:36:38.460953 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.461441 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.478144 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0923 11:36:38.478222 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0923 11:36:38.478322 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.487269 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-348379"
I0923 11:36:38.487639 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.517904 2903914 addons.go:234] Setting addon volcano=true in "addons-348379"
I0923 11:36:38.517966 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.518467 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.546347 2903914 addons.go:234] Setting addon volumesnapshots=true in "addons-348379"
I0923 11:36:38.546411 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.546987 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.575673 2903914 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0923 11:36:38.575914 2903914 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0923 11:36:38.579125 2903914 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0923 11:36:38.579150 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0923 11:36:38.579221 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.579592 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0923 11:36:38.579639 2903914 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0923 11:36:38.579689 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.617860 2903914 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0923 11:36:38.622150 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0923 11:36:38.622177 2903914 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0923 11:36:38.622251 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.622735 2903914 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0923 11:36:38.624431 2903914 addons.go:234] Setting addon default-storageclass=true in "addons-348379"
I0923 11:36:38.624466 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.628278 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.632340 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.636671 2903914 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0923 11:36:38.637544 2903914 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 11:36:38.637561 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0923 11:36:38.637622 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.655509 2903914 out.go:177] - Using image docker.io/registry:2.8.3
I0923 11:36:38.662366 2903914 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0923 11:36:38.665082 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0923 11:36:38.665107 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0923 11:36:38.665177 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.670015 2903914 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0923 11:36:38.676295 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0923 11:36:38.676385 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0923 11:36:38.676464 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.683562 2903914 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0923 11:36:38.686396 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0923 11:36:38.686419 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0923 11:36:38.686489 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.712304 2903914 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-348379"
I0923 11:36:38.712346 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:38.712766 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:38.727410 2903914 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0923 11:36:38.730115 2903914 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0923 11:36:38.735471 2903914 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
I0923 11:36:38.735719 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.757102 2903914 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 11:36:38.760000 2903914 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0923 11:36:38.766393 2903914 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0923 11:36:38.766549 2903914 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0923 11:36:38.769513 2903914 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0923 11:36:38.771099 2903914 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 11:36:38.774083 2903914 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0923 11:36:38.774459 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0923 11:36:38.774507 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0923 11:36:38.774615 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.780134 2903914 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
I0923 11:36:38.788013 2903914 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0923 11:36:38.790672 2903914 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0923 11:36:38.794853 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0923 11:36:38.794879 2903914 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0923 11:36:38.794943 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.799096 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0923 11:36:38.799171 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0923 11:36:38.799267 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.815113 2903914 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
I0923 11:36:38.821525 2903914 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0923 11:36:38.821606 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
I0923 11:36:38.822703 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.827569 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.850382 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.850811 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.867021 2903914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0923 11:36:38.867042 2903914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0923 11:36:38.867113 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.883391 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.898738 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.922806 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.939666 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.947337 2903914 out.go:177] - Using image docker.io/busybox:stable
I0923 11:36:38.950063 2903914 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0923 11:36:38.952600 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 11:36:38.952625 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0923 11:36:38.952695 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:38.965723 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.981816 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.987132 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:38.991897 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
W0923 11:36:39.007637 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0923 11:36:39.007696 2903914 retry.go:31] will retry after 239.86918ms: ssh: handshake failed: EOF
W0923 11:36:39.007745 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0923 11:36:39.007761 2903914 retry.go:31] will retry after 150.66552ms: ssh: handshake failed: EOF
I0923 11:36:39.023482 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:39.023945 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:39.169972 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0923 11:36:39.170259 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0923 11:36:39.494971 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0923 11:36:39.556492 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 11:36:39.608428 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0923 11:36:39.623138 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0923 11:36:39.623206 2903914 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0923 11:36:39.709728 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0923 11:36:39.709758 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0923 11:36:39.777118 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 11:36:39.827433 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0923 11:36:39.827460 2903914 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0923 11:36:39.863119 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0923 11:36:39.870165 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0923 11:36:39.870234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0923 11:36:39.883044 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0923 11:36:39.890345 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0923 11:36:39.896655 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0923 11:36:39.896731 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0923 11:36:39.967387 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0923 11:36:39.967475 2903914 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0923 11:36:40.018137 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0923 11:36:40.018231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0923 11:36:40.030206 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0923 11:36:40.030287 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0923 11:36:40.050639 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0923 11:36:40.058613 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0923 11:36:40.058708 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0923 11:36:40.150893 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0923 11:36:40.151005 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0923 11:36:40.242616 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0923 11:36:40.242710 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0923 11:36:40.287048 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0923 11:36:40.287084 2903914 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0923 11:36:40.486475 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0923 11:36:40.486504 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0923 11:36:40.493831 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0923 11:36:40.494920 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0923 11:36:40.494943 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0923 11:36:40.557986 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0923 11:36:40.558018 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0923 11:36:40.619083 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0923 11:36:40.619108 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0923 11:36:40.657629 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0923 11:36:40.657660 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0923 11:36:40.734595 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0923 11:36:40.734624 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0923 11:36:40.801552 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0923 11:36:40.801595 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0923 11:36:40.865890 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0923 11:36:40.930189 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0923 11:36:40.930231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0923 11:36:40.951956 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0923 11:36:40.951998 2903914 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0923 11:36:40.961846 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0923 11:36:40.961876 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0923 11:36:40.972100 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0923 11:36:41.216224 2903914 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 11:36:41.216250 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0923 11:36:41.260577 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0923 11:36:41.260621 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0923 11:36:41.278445 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 11:36:41.294750 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0923 11:36:41.294791 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0923 11:36:41.323456 2903914 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.153148456s)
I0923 11:36:41.323525 2903914 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0923 11:36:41.323500 2903914 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.153503843s)
I0923 11:36:41.324480 2903914 node_ready.go:35] waiting up to 6m0s for node "addons-348379" to be "Ready" ...
I0923 11:36:41.330122 2903914 node_ready.go:49] node "addons-348379" has status "Ready":"True"
I0923 11:36:41.330152 2903914 node_ready.go:38] duration metric: took 5.643428ms for node "addons-348379" to be "Ready" ...
I0923 11:36:41.330163 2903914 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 11:36:41.339728 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
I0923 11:36:41.543987 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0923 11:36:41.544058 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0923 11:36:41.648713 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0923 11:36:41.648741 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0923 11:36:41.746379 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0923 11:36:41.746407 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0923 11:36:41.828842 2903914 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-348379" context rescaled to 1 replicas
I0923 11:36:41.902914 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0923 11:36:41.921919 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.42690557s)
I0923 11:36:41.921979 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.365461937s)
I0923 11:36:41.922006 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.313555468s)
I0923 11:36:42.029240 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0923 11:36:42.029272 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0923 11:36:42.347806 2903914 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
I0923 11:36:42.347883 2903914 pod_ready.go:82] duration metric: took 1.008114432s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
E0923 11:36:42.347920 2903914 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
I0923 11:36:42.347968 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
I0923 11:36:42.549396 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0923 11:36:42.549431 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0923 11:36:42.830524 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 11:36:42.830560 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0923 11:36:43.211480 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 11:36:44.381316 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
I0923 11:36:44.884644 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.107485626s)
I0923 11:36:45.843349 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0923 11:36:45.843447 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:45.872465 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:46.470866 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0923 11:36:46.702197 2903914 addons.go:234] Setting addon gcp-auth=true in "addons-348379"
I0923 11:36:46.702270 2903914 host.go:66] Checking if "addons-348379" exists ...
I0923 11:36:46.702828 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
I0923 11:36:46.733217 2903914 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0923 11:36:46.733280 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
I0923 11:36:46.775337 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
I0923 11:36:46.854788 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
I0923 11:36:47.501824 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.638626321s)
I0923 11:36:47.501900 2903914 addons.go:475] Verifying addon ingress=true in "addons-348379"
I0923 11:36:47.504030 2903914 out.go:177] * Verifying ingress addon...
I0923 11:36:47.506684 2903914 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0923 11:36:47.511450 2903914 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0923 11:36:47.511576 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:48.013380 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:48.544278 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:48.902915 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
I0923 11:36:49.026130 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:49.515743 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:49.786183 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.903059538s)
I0923 11:36:49.786253 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.89583004s)
I0923 11:36:49.786324 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.735598298s)
I0923 11:36:49.786361 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.292505813s)
I0923 11:36:49.786376 2903914 addons.go:475] Verifying addon registry=true in "addons-348379"
I0923 11:36:49.786562 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.920643746s)
I0923 11:36:49.786579 2903914 addons.go:475] Verifying addon metrics-server=true in "addons-348379"
I0923 11:36:49.786620 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.814490533s)
I0923 11:36:49.786930 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.508442064s)
W0923 11:36:49.786964 2903914 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 11:36:49.786984 2903914 retry.go:31] will retry after 231.122068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 11:36:49.787069 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.884124931s)
I0923 11:36:49.788510 2903914 out.go:177] * Verifying registry addon...
I0923 11:36:49.789825 2903914 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-348379 service yakd-dashboard -n yakd-dashboard
I0923 11:36:49.793761 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0923 11:36:49.857501 2903914 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0923 11:36:49.857529 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:50.019003 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 11:36:50.091953 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.88041396s)
I0923 11:36:50.091996 2903914 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-348379"
I0923 11:36:50.092188 2903914 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.35893014s)
I0923 11:36:50.094620 2903914 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 11:36:50.094707 2903914 out.go:177] * Verifying csi-hostpath-driver addon...
I0923 11:36:50.096428 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:50.099435 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 11:36:50.101290 2903914 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0923 11:36:50.102965 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0923 11:36:50.103035 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0923 11:36:50.192143 2903914 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 11:36:50.192217 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:50.253203 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0923 11:36:50.253267 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0923 11:36:50.330971 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:50.356163 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 11:36:50.356234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0923 11:36:50.430758 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 11:36:50.512255 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:50.604722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:50.797707 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:51.023077 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:51.104450 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:51.297470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:51.356586 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
I0923 11:36:51.511733 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:51.617197 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:51.799992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:51.872269 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.853212545s)
I0923 11:36:51.872449 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.441621639s)
I0923 11:36:51.875653 2903914 addons.go:475] Verifying addon gcp-auth=true in "addons-348379"
I0923 11:36:51.880701 2903914 out.go:177] * Verifying gcp-auth addon...
I0923 11:36:51.886238 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0923 11:36:51.899994 2903914 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 11:36:52.012221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:52.113517 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:52.299559 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:52.512212 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:52.605423 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:52.799422 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:52.854916 2903914 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"True"
I0923 11:36:52.854943 2903914 pod_ready.go:82] duration metric: took 10.506940522s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
I0923 11:36:52.854956 2903914 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
I0923 11:36:52.861027 2903914 pod_ready.go:93] pod "etcd-addons-348379" in "kube-system" namespace has status "Ready":"True"
I0923 11:36:52.861059 2903914 pod_ready.go:82] duration metric: took 6.063045ms for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
I0923 11:36:52.861112 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
I0923 11:36:52.867900 2903914 pod_ready.go:93] pod "kube-apiserver-addons-348379" in "kube-system" namespace has status "Ready":"True"
I0923 11:36:52.867934 2903914 pod_ready.go:82] duration metric: took 6.806328ms for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
I0923 11:36:52.867947 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
I0923 11:36:52.875428 2903914 pod_ready.go:93] pod "kube-controller-manager-addons-348379" in "kube-system" namespace has status "Ready":"True"
I0923 11:36:52.875465 2903914 pod_ready.go:82] duration metric: took 7.477644ms for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
I0923 11:36:52.875477 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
I0923 11:36:52.881487 2903914 pod_ready.go:93] pod "kube-proxy-nqbmm" in "kube-system" namespace has status "Ready":"True"
I0923 11:36:52.881527 2903914 pod_ready.go:82] duration metric: took 6.024203ms for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
I0923 11:36:52.881558 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
I0923 11:36:53.013082 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:53.115359 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:53.261367 2903914 pod_ready.go:93] pod "kube-scheduler-addons-348379" in "kube-system" namespace has status "Ready":"True"
I0923 11:36:53.261440 2903914 pod_ready.go:82] duration metric: took 379.865643ms for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
I0923 11:36:53.261468 2903914 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
I0923 11:36:53.298690 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:53.511926 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:53.605400 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:53.806273 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:54.013143 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:54.105281 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:54.297943 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:54.511688 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:54.604819 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:54.801146 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:55.012597 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:55.104977 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:55.267783 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
I0923 11:36:55.297710 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:55.513110 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:55.604704 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:55.797620 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:56.013461 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:56.105679 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:56.298722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:56.511968 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:56.607325 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:56.804470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:57.011592 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:57.104763 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:57.298808 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:57.511851 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:57.612444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:57.768189 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
I0923 11:36:57.797413 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:58.012279 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:58.104746 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:58.298054 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:58.511885 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:58.604675 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:58.801111 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:59.014365 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:59.115071 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:59.299589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:36:59.511922 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:36:59.604297 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:36:59.768227 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
I0923 11:36:59.798277 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:00.038178 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:00.105635 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:00.333590 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:00.512357 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:00.604689 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:00.799851 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:01.011394 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:01.105032 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:01.310842 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:01.511596 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:01.605046 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:01.768750 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
I0923 11:37:01.798428 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:02.013963 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:02.113737 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:02.297934 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:02.511024 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:02.605393 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:02.798053 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:03.012529 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:03.104923 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:03.298010 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:03.512557 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:03.604282 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:03.768828 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
I0923 11:37:03.798455 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:04.015940 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:04.104743 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:04.299059 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:04.512309 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:04.606024 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:04.798339 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:05.019138 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:05.104829 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:05.298375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:05.511973 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:05.604665 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:05.767100 2903914 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"True"
I0923 11:37:05.767130 2903914 pod_ready.go:82] duration metric: took 12.505640545s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
I0923 11:37:05.767142 2903914 pod_ready.go:39] duration metric: took 24.436967089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 11:37:05.767156 2903914 api_server.go:52] waiting for apiserver process to appear ...
I0923 11:37:05.767223 2903914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 11:37:05.781985 2903914 api_server.go:72] duration metric: took 27.445202984s to wait for apiserver process to appear ...
I0923 11:37:05.782061 2903914 api_server.go:88] waiting for apiserver healthz status ...
I0923 11:37:05.782092 2903914 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0923 11:37:05.789746 2903914 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0923 11:37:05.790754 2903914 api_server.go:141] control plane version: v1.31.1
I0923 11:37:05.790781 2903914 api_server.go:131] duration metric: took 8.705461ms to wait for apiserver health ...
I0923 11:37:05.790793 2903914 system_pods.go:43] waiting for kube-system pods to appear ...
I0923 11:37:05.800773 2903914 system_pods.go:59] 18 kube-system pods found
I0923 11:37:05.800813 2903914 system_pods.go:61] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
I0923 11:37:05.800824 2903914 system_pods.go:61] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 11:37:05.800829 2903914 system_pods.go:61] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
I0923 11:37:05.800839 2903914 system_pods.go:61] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 11:37:05.800844 2903914 system_pods.go:61] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
I0923 11:37:05.800848 2903914 system_pods.go:61] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
I0923 11:37:05.800852 2903914 system_pods.go:61] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
I0923 11:37:05.800856 2903914 system_pods.go:61] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
I0923 11:37:05.800860 2903914 system_pods.go:61] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
I0923 11:37:05.800866 2903914 system_pods.go:61] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
I0923 11:37:05.800870 2903914 system_pods.go:61] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
I0923 11:37:05.800875 2903914 system_pods.go:61] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 11:37:05.800884 2903914 system_pods.go:61] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
I0923 11:37:05.800892 2903914 system_pods.go:61] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 11:37:05.800908 2903914 system_pods.go:61] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 11:37:05.800916 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 11:37:05.800924 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 11:37:05.800931 2903914 system_pods.go:61] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
I0923 11:37:05.800938 2903914 system_pods.go:74] duration metric: took 10.139277ms to wait for pod list to return data ...
I0923 11:37:05.800948 2903914 default_sa.go:34] waiting for default service account to be created ...
I0923 11:37:05.802797 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:05.803949 2903914 default_sa.go:45] found service account: "default"
I0923 11:37:05.803977 2903914 default_sa.go:55] duration metric: took 3.018472ms for default service account to be created ...
I0923 11:37:05.803986 2903914 system_pods.go:116] waiting for k8s-apps to be running ...
I0923 11:37:05.813725 2903914 system_pods.go:86] 18 kube-system pods found
I0923 11:37:05.813761 2903914 system_pods.go:89] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
I0923 11:37:05.813771 2903914 system_pods.go:89] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 11:37:05.813777 2903914 system_pods.go:89] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
I0923 11:37:05.813785 2903914 system_pods.go:89] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 11:37:05.813789 2903914 system_pods.go:89] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
I0923 11:37:05.813793 2903914 system_pods.go:89] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
I0923 11:37:05.813798 2903914 system_pods.go:89] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
I0923 11:37:05.813810 2903914 system_pods.go:89] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
I0923 11:37:05.813815 2903914 system_pods.go:89] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
I0923 11:37:05.813824 2903914 system_pods.go:89] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
I0923 11:37:05.813828 2903914 system_pods.go:89] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
I0923 11:37:05.813835 2903914 system_pods.go:89] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 11:37:05.813845 2903914 system_pods.go:89] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
I0923 11:37:05.813851 2903914 system_pods.go:89] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 11:37:05.813859 2903914 system_pods.go:89] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 11:37:05.813866 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 11:37:05.813877 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 11:37:05.813881 2903914 system_pods.go:89] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
I0923 11:37:05.813889 2903914 system_pods.go:126] duration metric: took 9.896579ms to wait for k8s-apps to be running ...
I0923 11:37:05.813904 2903914 system_svc.go:44] waiting for kubelet service to be running ....
I0923 11:37:05.813964 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0923 11:37:05.826195 2903914 system_svc.go:56] duration metric: took 12.281298ms WaitForService to wait for kubelet
I0923 11:37:05.826224 2903914 kubeadm.go:582] duration metric: took 27.489446441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 11:37:05.826246 2903914 node_conditions.go:102] verifying NodePressure condition ...
I0923 11:37:05.829405 2903914 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0923 11:37:05.829438 2903914 node_conditions.go:123] node cpu capacity is 2
I0923 11:37:05.829451 2903914 node_conditions.go:105] duration metric: took 3.199247ms to run NodePressure ...
I0923 11:37:05.829481 2903914 start.go:241] waiting for startup goroutines ...
I0923 11:37:06.016364 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:06.105089 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:06.297935 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:06.512579 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:06.605014 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:06.797841 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:07.011801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:07.104294 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:07.298088 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:07.511664 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:07.604893 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:07.801940 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:08.013764 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:08.105345 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:08.299766 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:08.512208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:08.605989 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:08.797920 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:09.013457 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:09.113285 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:09.298479 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:09.519639 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:09.621187 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:09.798092 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:10.023070 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:10.120870 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:10.299336 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:10.511998 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:10.604502 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:10.798656 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:11.011605 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:11.104357 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:11.297872 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:11.510891 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:11.605182 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:11.798112 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:12.016742 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:12.106392 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:12.302890 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:12.514134 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:12.606249 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:12.802344 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:13.012829 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:13.106630 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:13.297952 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:13.520400 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:13.609375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:13.799034 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:14.014344 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:14.118058 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:14.298788 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:14.510806 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:14.631742 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:14.797720 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:15.019826 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:15.106226 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:15.298830 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:15.511804 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:15.605411 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:15.798246 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:16.012028 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:16.104868 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:16.297740 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:16.524665 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:16.625850 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:16.797603 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:17.011338 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:17.103818 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:17.298307 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:17.512577 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:17.605193 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:17.798142 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:18.013543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:18.105035 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:18.297589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:18.512084 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:18.605341 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:18.798244 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:19.012543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:19.113664 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:19.298444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:19.511599 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:19.606804 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:19.798859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:20.017663 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:20.106755 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:20.297564 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:20.512160 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:20.604958 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:20.797812 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:21.013732 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:21.105616 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:21.298921 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:21.510795 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:21.604820 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:21.797800 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:22.012174 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:22.114441 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:22.298237 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 11:37:22.513158 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:22.625275 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:22.799161 2903914 kapi.go:107] duration metric: took 33.005400732s to wait for kubernetes.io/minikube-addons=registry ...
I0923 11:37:23.013096 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:23.105126 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:23.511910 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:23.613223 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:24.014075 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:24.105028 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:24.511000 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:24.605248 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:25.014495 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:25.111460 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:25.512685 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:25.612274 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:26.013492 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:26.105721 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:26.511127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:26.605859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:27.012240 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:27.105210 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:27.511957 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:27.604964 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:28.012182 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:28.104813 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:28.511895 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:28.605303 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:29.013053 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:29.104426 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:29.511346 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:29.603992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:30.014157 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:30.118062 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:30.511690 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:30.604969 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:31.014750 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:31.105518 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:31.511111 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:31.605002 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:32.012362 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:32.104960 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:32.511542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:32.604045 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:33.013753 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:33.104350 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:33.512293 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:33.614483 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 11:37:34.016127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:34.104734 2903914 kapi.go:107] duration metric: took 44.005304527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0923 11:37:34.510755 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:35.019903 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:35.510951 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:36.014603 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:36.510966 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:37.016710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:37.511532 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:38.013339 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:38.511853 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:39.012335 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:39.511569 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:40.013791 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:40.511129 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:41.011102 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:41.510793 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:42.015629 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:42.512029 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:43.011711 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:43.510801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:44.011721 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:44.511165 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:45.039643 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:45.511176 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:46.011883 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:46.511761 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:47.011001 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:47.511048 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:48.012925 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:48.511632 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:49.010792 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:49.511167 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:50.018047 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:50.511473 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:51.013208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:51.511710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:52.011604 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:52.512230 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:53.011680 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:53.511429 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:54.012725 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:54.511977 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:55.013759 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:55.512155 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:56.012542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:56.512221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 11:37:57.011684 2903914 kapi.go:107] duration metric: took 1m9.505004156s to wait for app.kubernetes.io/name=ingress-nginx ...
I0923 11:42:51.886491 2903914 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
I0923 11:42:51.886551 2903914 kapi.go:107] duration metric: took 6m0.000318633s to wait for kubernetes.io/minikube-addons=gcp-auth ...
W0923 11:42:51.886644 2903914 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
I0923 11:42:51.888644 2903914 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, cloud-spanner, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress
I0923 11:42:51.890515 2903914 addons.go:510] duration metric: took 6m13.553319939s for enable addons: enabled=[ingress-dns nvidia-device-plugin default-storageclass storage-provisioner-rancher cloud-spanner storage-provisioner volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress]
I0923 11:42:51.890575 2903914 start.go:246] waiting for cluster config update ...
I0923 11:42:51.890600 2903914 start.go:255] writing updated cluster config ...
I0923 11:42:51.890918 2903914 ssh_runner.go:195] Run: rm -f paused
I0923 11:42:52.246444 2903914 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0923 11:42:52.248672 2903914 out.go:177] * Done! kubectl is now configured to use "addons-348379" cluster and "default" namespace by default
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
b6876ad1ce80c 4f725bf50aaa5 27 seconds ago Exited gadget 7 1161771f95540 gadget-xl5qc
052af9fc2b4c1 289a818c8d9c5 10 minutes ago Running controller 0 47eb6c64c7a7d ingress-nginx-controller-bc57996ff-6wq45
967992dcbaf35 ee6d597e62dc8 11 minutes ago Running csi-snapshotter 0 b44dddc3de5c5 csi-hostpathplugin-zdwf8
c27f769c6bce0 642ded511e141 11 minutes ago Running csi-provisioner 0 b44dddc3de5c5 csi-hostpathplugin-zdwf8
0faa259c6b420 922312104da8a 11 minutes ago Running liveness-probe 0 b44dddc3de5c5 csi-hostpathplugin-zdwf8
994b2f0a53275 08f6b2990811a 11 minutes ago Running hostpath 0 b44dddc3de5c5 csi-hostpathplugin-zdwf8
ccaa6826447e2 9a80d518f102c 11 minutes ago Running csi-attacher 0 3c219582bf398 csi-hostpath-attacher-0
63bc3e7f0cc00 420193b27261a 11 minutes ago Exited patch 0 6fd7a092c3487 ingress-nginx-admission-patch-bfw56
429fb2006ad16 77bdba588b953 11 minutes ago Running yakd 0 c9ad9c335e299 yakd-dashboard-67d98fc6b-j4xcr
c5db857da31e6 4d1e5c3e97420 11 minutes ago Running volume-snapshot-controller 0 311300b7a364c snapshot-controller-56fcc65765-d47ng
f35fa36f73991 c9cf76bb104e1 11 minutes ago Running registry 0 f3640fdf0d54f registry-66c9cd494c-fhm8g
d1ac50a6261dd 420193b27261a 11 minutes ago Exited create 0 895d61d9c16ad ingress-nginx-admission-create-fwt6v
ac4d50ae15f93 4d1e5c3e97420 11 minutes ago Running volume-snapshot-controller 0 698a149c7f280 snapshot-controller-56fcc65765-dchr7
ed38cab12122b 0107d56dbc0be 11 minutes ago Running node-driver-registrar 0 b44dddc3de5c5 csi-hostpathplugin-zdwf8
ba68702d656c2 3410e1561990a 11 minutes ago Running registry-proxy 0 dd955d0d91d1c registry-proxy-7qmf5
6781c22ae4a4e 5548a49bb60ba 11 minutes ago Running metrics-server 0 ffa74d42d59a0 metrics-server-84c5f94fbc-dgpbq
5e5a8e69b34fd 7ce2150c8929b 11 minutes ago Running local-path-provisioner 0 9bc0d2214e2a7 local-path-provisioner-86d989889c-h5pl9
f32c490729dc3 be9cac3585579 11 minutes ago Running cloud-spanner-emulator 0 b7452ccbf189d cloud-spanner-emulator-5b584cc74-lbht7
7bd548d7da390 a9bac31a5be8d 11 minutes ago Running nvidia-device-plugin-ctr 0 5582827316900 nvidia-device-plugin-daemonset-xqqn9
cecddb85ce0f0 487fa743e1e22 11 minutes ago Running csi-resizer 0 85d2097f2da83 csi-hostpath-resizer-0
a6cc88c765de7 1461903ec4fe9 11 minutes ago Running csi-external-health-monitor-controller 0 b44dddc3de5c5 csi-hostpathplugin-zdwf8
c37000ef28652 35508c2f890c4 12 minutes ago Running minikube-ingress-dns 0 d881770814e2f kube-ingress-dns-minikube
497f8c41b274e 2f6c962e7b831 12 minutes ago Running coredns 0 d2d48d489a636 coredns-7c65d6cfc9-ppz9h
b05bd4b18e280 ba04bb24b9575 12 minutes ago Running storage-provisioner 0 e816c2d6b5461 storage-provisioner
a7e55167b7b39 6a23fa8fd2b78 12 minutes ago Running kindnet-cni 0 9389ee1ac67f5 kindnet-4kcdh
005547c4c4723 24a140c548c07 12 minutes ago Running kube-proxy 0 a4defdda67173 kube-proxy-nqbmm
9255b7a6f4a59 7f8aa378bb47d 12 minutes ago Running kube-scheduler 0 cae78edec7e41 kube-scheduler-addons-348379
92413a7a8d6f6 279f381cb3736 12 minutes ago Running kube-controller-manager 0 731c7b79dc027 kube-controller-manager-addons-348379
20e5f68e09619 d3f53a98c0a9d 12 minutes ago Running kube-apiserver 0 bdcb66b095a48 kube-apiserver-addons-348379
8abc59946512a 27e3830e14027 12 minutes ago Running etcd 0 a8c7d184a28f5 etcd-addons-348379
==> containerd <==
Sep 23 11:48:09 addons-348379 containerd[816]: time="2024-09-23T11:48:09.117587538Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\""
Sep 23 11:48:09 addons-348379 containerd[816]: time="2024-09-23T11:48:09.120581872Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Sep 23 11:48:09 addons-348379 containerd[816]: time="2024-09-23T11:48:09.175115402Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized"
Sep 23 11:48:09 addons-348379 containerd[816]: time="2024-09-23T11:48:09.175227787Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: active requests=0, bytes read=0"
Sep 23 11:48:25 addons-348379 containerd[816]: time="2024-09-23T11:48:25.118249869Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\""
Sep 23 11:48:25 addons-348379 containerd[816]: time="2024-09-23T11:48:25.120520226Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Sep 23 11:48:25 addons-348379 containerd[816]: time="2024-09-23T11:48:25.171933737Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized"
Sep 23 11:48:25 addons-348379 containerd[816]: time="2024-09-23T11:48:25.171975181Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: active requests=0, bytes read=0"
Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.118491960Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.241710606Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.243226357Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.247087403Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 128.547919ms"
Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.247131677Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.249056477Z" level=info msg="CreateContainer within sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" for container &ContainerMetadata{Name:gadget,Attempt:7,}"
Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.268117104Z" level=info msg="CreateContainer within sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" for &ContainerMetadata{Name:gadget,Attempt:7,} returns container id \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\""
Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.268809400Z" level=info msg="StartContainer for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\""
Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.321530401Z" level=info msg="StartContainer for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\" returns successfully"
Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.653119197Z" level=error msg="ExecSync for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\" failed" error="failed to exec in container: failed to start exec \"dc49331fb3f2c4a26a2f9e86949a3baefd68bd73ef3d1c82061bf43b68102fc7\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.684794612Z" level=error msg="ExecSync for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\" failed" error="failed to exec in container: failed to start exec \"019747eb7e72676a24024480f39aacf90192237b8d6124c9788b43ea4ceadfb1\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.695931597Z" level=error msg="ExecSync for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\" failed" error="failed to exec in container: failed to start exec \"cf8c091a6bd1a53b2cba362cad1080b3f4032f10863e843ca68860375e3096b9\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.819898227Z" level=info msg="shim disconnected" id=b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd namespace=k8s.io
Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.819956180Z" level=warning msg="cleaning up after shim disconnected" id=b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd namespace=k8s.io
Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.819967503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 23 11:48:28 addons-348379 containerd[816]: time="2024-09-23T11:48:28.423586349Z" level=info msg="RemoveContainer for \"c320da8356341a0dbcce4b452c8bac9e58aa8b49392b7faba1379fcdc1450bab\""
Sep 23 11:48:28 addons-348379 containerd[816]: time="2024-09-23T11:48:28.431878571Z" level=info msg="RemoveContainer for \"c320da8356341a0dbcce4b452c8bac9e58aa8b49392b7faba1379fcdc1450bab\" returns successfully"
==> coredns [497f8c41b274e14c72d9933f13fac36b6c4acf8def62c9f3205823623e58226d] <==
[INFO] 10.244.0.8:44197 - 54267 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180151s
[INFO] 10.244.0.8:41305 - 6638 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001704017s
[INFO] 10.244.0.8:41305 - 35565 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00141999s
[INFO] 10.244.0.8:40738 - 27223 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080918s
[INFO] 10.244.0.8:40738 - 10324 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129353s
[INFO] 10.244.0.8:56286 - 60697 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080098s
[INFO] 10.244.0.8:56286 - 24093 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043076s
[INFO] 10.244.0.8:43089 - 14336 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004978s
[INFO] 10.244.0.8:43089 - 63494 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035692s
[INFO] 10.244.0.8:57209 - 23569 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041272s
[INFO] 10.244.0.8:57209 - 18191 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041304s
[INFO] 10.244.0.8:58538 - 8344 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004349043s
[INFO] 10.244.0.8:58538 - 22686 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003876717s
[INFO] 10.244.0.8:48601 - 29559 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050166s
[INFO] 10.244.0.8:48601 - 628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054556s
[INFO] 10.244.0.8:43467 - 15089 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093161s
[INFO] 10.244.0.8:43467 - 32245 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042002s
[INFO] 10.244.0.8:54486 - 7532 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066798s
[INFO] 10.244.0.8:54486 - 50287 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038416s
[INFO] 10.244.0.8:33247 - 25963 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052963s
[INFO] 10.244.0.8:33247 - 9581 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000040123s
[INFO] 10.244.0.8:42814 - 61299 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001881616s
[INFO] 10.244.0.8:42814 - 6802 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001453827s
[INFO] 10.244.0.8:59224 - 13301 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046384s
[INFO] 10.244.0.8:59224 - 30455 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000035372s
==> describe nodes <==
Name: addons-348379
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-348379
kubernetes.io/os=linux
minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
minikube.k8s.io/name=addons-348379
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-348379
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-348379"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 23 Sep 2024 11:36:30 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-348379
AcquireTime: <unset>
RenewTime: Mon, 23 Sep 2024 11:48:48 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 23 Sep 2024 11:48:17 +0000 Mon, 23 Sep 2024 11:36:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 23 Sep 2024 11:48:17 +0000 Mon, 23 Sep 2024 11:36:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 23 Sep 2024 11:48:17 +0000 Mon, 23 Sep 2024 11:36:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 23 Sep 2024 11:48:17 +0000 Mon, 23 Sep 2024 11:36:31 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-348379
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
System Info:
Machine ID: cb92b9be866447b6af2ef85f12013e1b
System UUID: 84447a6a-e17a-42ff-ba46-fb82e93bc172
Boot ID: d8899273-2c3a-49f7-8c9a-66d2209373ba
Kernel Version: 5.15.0-1070-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.22
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (27 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default cloud-spanner-emulator-5b584cc74-lbht7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gadget gadget-xl5qc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
ingress-nginx ingress-nginx-controller-bc57996ff-6wq45 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 12m
kube-system coredns-7c65d6cfc9-ppz9h 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 12m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system csi-hostpathplugin-zdwf8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system etcd-addons-348379 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 12m
kube-system kindnet-4kcdh 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 12m
kube-system kube-apiserver-addons-348379 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-348379 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-nqbmm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-348379 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system metrics-server-84c5f94fbc-dgpbq 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 12m
kube-system nvidia-device-plugin-daemonset-xqqn9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system registry-66c9cd494c-fhm8g 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system registry-proxy-7qmf5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system snapshot-controller-56fcc65765-d47ng 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system snapshot-controller-56fcc65765-dchr7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage local-path-provisioner-86d989889c-h5pl9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
volcano-system volcano-admission-7f54bd7598-s85bg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
volcano-system volcano-admission-init-f2bhm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
volcano-system volcano-controllers-5ff7c5d4db-w658s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
volcano-system volcano-scheduler-79dc4b78bb-2vx88 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
yakd-dashboard yakd-dashboard-67d98fc6b-j4xcr 0 (0%) 0 (0%) 128Mi (1%) 256Mi (3%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1050m (52%) 100m (5%)
memory 638Mi (8%) 476Mi (6%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal NodeHasSufficientMemory 12m (x8 over 12m) kubelet Node addons-348379 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m (x7 over 12m) kubelet Node addons-348379 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m (x7 over 12m) kubelet Node addons-348379 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node addons-348379 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node addons-348379 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node addons-348379 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node addons-348379 event: Registered Node addons-348379 in Controller
==> dmesg <==
==> etcd [8abc59946512ab5ab4d902de194545051c7e577bd5196212bf85326ca705cd43] <==
{"level":"info","ts":"2024-09-23T11:36:27.038394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2024-09-23T11:36:27.038527Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-09-23T11:36:27.823331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-23T11:36:27.823382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-23T11:36:27.823400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-23T11:36:27.823433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-23T11:36:27.823441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-23T11:36:27.823459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-23T11:36:27.823470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-23T11:36:27.825628Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-348379 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-23T11:36:27.825777Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T11:36:27.827294Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T11:36:27.827491Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T11:36:27.827582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T11:36:27.827732Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T11:36:27.827852Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T11:36:27.831921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T11:36:27.856413Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-23T11:36:27.847318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-23T11:36:27.847973Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T11:36:27.857537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-23T11:36:27.856653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-23T11:46:28.587946Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1709}
{"level":"info","ts":"2024-09-23T11:46:28.663638Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1709,"took":"75.159252ms","hash":39109330,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4308992,"current-db-size-in-use":"4.3 MB"}
{"level":"info","ts":"2024-09-23T11:46:28.663685Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":39109330,"revision":1709,"compact-revision":-1}
==> kernel <==
11:48:54 up 1 day, 19:31, 0 users, load average: 0.54, 0.52, 1.53
Linux addons-348379 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [a7e55167b7b39430011e4998f44d30eaec7edc15414dc232724161a09a27e599] <==
I0923 11:46:50.320018 1 main.go:299] handling current node
I0923 11:47:00.315432 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:47:00.315477 1 main.go:299] handling current node
I0923 11:47:10.312153 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:47:10.312207 1 main.go:299] handling current node
I0923 11:47:20.315766 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:47:20.315803 1 main.go:299] handling current node
I0923 11:47:30.315485 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:47:30.315722 1 main.go:299] handling current node
I0923 11:47:40.312339 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:47:40.312385 1 main.go:299] handling current node
I0923 11:47:50.321041 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:47:50.321081 1 main.go:299] handling current node
I0923 11:48:00.327253 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:48:00.327544 1 main.go:299] handling current node
I0923 11:48:10.312132 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:48:10.312166 1 main.go:299] handling current node
I0923 11:48:20.312412 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:48:20.312446 1 main.go:299] handling current node
I0923 11:48:30.312298 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:48:30.312427 1 main.go:299] handling current node
I0923 11:48:40.312125 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:48:40.312158 1 main.go:299] handling current node
I0923 11:48:50.320905 1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
I0923 11:48:50.320946 1 main.go:299] handling current node
==> kube-apiserver [20e5f68e09619b9d622e831b0c429aae0e245dfb5c647d9e6fd9193c6cdfedac] <==
W0923 11:44:54.755857 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
W0923 11:44:54.883323 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
E0923 11:44:54.883370 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
W0923 11:44:54.884993 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
W0923 11:45:54.765261 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
E0923 11:45:54.765303 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
W0923 11:45:54.767161 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
W0923 11:45:54.891992 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
E0923 11:45:54.892034 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
W0923 11:45:54.893739 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
W0923 11:46:54.776480 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
E0923 11:46:54.776526 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
W0923 11:46:54.778226 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
W0923 11:46:54.899951 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
E0923 11:46:54.900006 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
W0923 11:46:54.901791 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
W0923 11:47:47.180225 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
E0923 11:47:47.180267 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
W0923 11:47:47.182017 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
W0923 11:47:54.783948 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
E0923 11:47:54.783991 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
W0923 11:47:54.785615 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
W0923 11:47:54.908274 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
E0923 11:47:54.908316 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
W0923 11:47:54.909972 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
==> kube-controller-manager [92413a7a8d6f6d058e961a0759535062b02d7d50e5012e81614e6290ce3465b3] <==
E0923 11:44:54.756577 1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:44:54.757872 1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:44:54.885583 1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:44:54.886775 1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:45:54.767939 1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:45:54.769008 1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:45:54.894366 1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:45:54.895439 1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:46:54.778891 1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:46:54.780011 1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:46:54.902541 1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:46:54.903797 1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
I0923 11:47:47.182800 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="7.665714ms"
E0923 11:47:47.182839 1 replica_set.go:560] "Unhandled Error" err="sync \"gcp-auth/gcp-auth-89d5ffd79\" failed with Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:47:54.786207 1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:47:54.787446 1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:47:54.910691 1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
E0923 11:47:54.912612 1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
I0923 11:48:17.574828 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-348379"
I0923 11:48:20.132889 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="65.493µs"
I0923 11:48:20.146254 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="49.452µs"
I0923 11:48:33.131858 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="68.085µs"
I0923 11:48:35.131468 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="44.004µs"
I0923 11:48:38.129745 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
I0923 11:48:52.131254 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
==> kube-proxy [005547c4c4723cd6ce4dac939ebab2b2d89e428b99ec971a179497842dcb5abe] <==
I0923 11:36:39.711887 1 server_linux.go:66] "Using iptables proxy"
I0923 11:36:39.808570 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0923 11:36:39.808647 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0923 11:36:39.869255 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0923 11:36:39.869320 1 server_linux.go:169] "Using iptables Proxier"
I0923 11:36:39.872284 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0923 11:36:39.872851 1 server.go:483] "Version info" version="v1.31.1"
I0923 11:36:39.872865 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0923 11:36:39.885041 1 config.go:199] "Starting service config controller"
I0923 11:36:39.885075 1 shared_informer.go:313] Waiting for caches to sync for service config
I0923 11:36:39.885100 1 config.go:105] "Starting endpoint slice config controller"
I0923 11:36:39.885105 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0923 11:36:39.886870 1 config.go:328] "Starting node config controller"
I0923 11:36:39.886882 1 shared_informer.go:313] Waiting for caches to sync for node config
I0923 11:36:39.986150 1 shared_informer.go:320] Caches are synced for service config
I0923 11:36:39.986235 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0923 11:36:39.987256 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [9255b7a6f4a590493be04a5bbbdf14b0efddfe1f321b8a25d2eed1055c6741df] <==
W0923 11:36:31.587742 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0923 11:36:31.587843 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.586774 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0923 11:36:31.587943 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.586831 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0923 11:36:31.588052 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.586859 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0923 11:36:31.588221 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.586913 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0923 11:36:31.588380 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.586991 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0923 11:36:31.588540 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.587046 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0923 11:36:31.588655 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.587093 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0923 11:36:31.588819 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.587129 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0923 11:36:31.588921 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.587176 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0923 11:36:31.589070 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.587223 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0923 11:36:31.589185 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0923 11:36:31.587261 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0923 11:36:31.589294 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0923 11:36:32.574074 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 23 11:48:20 addons-348379 kubelet[1462]: E0923 11:48:20.117988 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
Sep 23 11:48:20 addons-348379 kubelet[1462]: E0923 11:48:20.118030 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
Sep 23 11:48:22 addons-348379 kubelet[1462]: I0923 11:48:22.116350 1462 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-fhm8g" secret="" err="secret \"gcp-auth\" not found"
Sep 23 11:48:25 addons-348379 kubelet[1462]: E0923 11:48:25.172344 1462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
Sep 23 11:48:25 addons-348379 kubelet[1462]: E0923 11:48:25.172431 1462 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
Sep 23 11:48:25 addons-348379 kubelet[1462]: E0923 11:48:25.173095 1462 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:main,Image:docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e,Command:[./gen-admission-secret.sh --service volcano-admission-service --namespace volcano-system --secret volcano-admission-secret],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzxvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessageP
olicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-admission-init-f2bhm_volcano-system(5ae35267-183a-42ba-96bc-03dac14139ac): ErrImagePull: failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized" logger="UnhandledError"
Sep 23 11:48:25 addons-348379 kubelet[1462]: E0923 11:48:25.174440 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\": failed to resolve reference \\\"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
Sep 23 11:48:26 addons-348379 kubelet[1462]: I0923 11:48:26.116758 1462 scope.go:117] "RemoveContainer" containerID="c320da8356341a0dbcce4b452c8bac9e58aa8b49392b7faba1379fcdc1450bab"
Sep 23 11:48:27 addons-348379 kubelet[1462]: E0923 11:48:27.653857 1462 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"dc49331fb3f2c4a26a2f9e86949a3baefd68bd73ef3d1c82061bf43b68102fc7\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd" cmd=["/bin/gadgettracermanager","-liveness"]
Sep 23 11:48:27 addons-348379 kubelet[1462]: E0923 11:48:27.685041 1462 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"019747eb7e72676a24024480f39aacf90192237b8d6124c9788b43ea4ceadfb1\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd" cmd=["/bin/gadgettracermanager","-liveness"]
Sep 23 11:48:27 addons-348379 kubelet[1462]: E0923 11:48:27.696184 1462 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"cf8c091a6bd1a53b2cba362cad1080b3f4032f10863e843ca68860375e3096b9\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd" cmd=["/bin/gadgettracermanager","-liveness"]
Sep 23 11:48:28 addons-348379 kubelet[1462]: I0923 11:48:28.116782 1462 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xqqn9" secret="" err="secret \"gcp-auth\" not found"
Sep 23 11:48:28 addons-348379 kubelet[1462]: I0923 11:48:28.416858 1462 scope.go:117] "RemoveContainer" containerID="c320da8356341a0dbcce4b452c8bac9e58aa8b49392b7faba1379fcdc1450bab"
Sep 23 11:48:28 addons-348379 kubelet[1462]: I0923 11:48:28.417440 1462 scope.go:117] "RemoveContainer" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd"
Sep 23 11:48:28 addons-348379 kubelet[1462]: E0923 11:48:28.418317 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
Sep 23 11:48:31 addons-348379 kubelet[1462]: I0923 11:48:31.107136 1462 scope.go:117] "RemoveContainer" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd"
Sep 23 11:48:31 addons-348379 kubelet[1462]: E0923 11:48:31.107852 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
Sep 23 11:48:33 addons-348379 kubelet[1462]: E0923 11:48:33.118414 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
Sep 23 11:48:35 addons-348379 kubelet[1462]: E0923 11:48:35.117742 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
Sep 23 11:48:38 addons-348379 kubelet[1462]: E0923 11:48:38.118766 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
Sep 23 11:48:43 addons-348379 kubelet[1462]: I0923 11:48:43.117018 1462 scope.go:117] "RemoveContainer" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd"
Sep 23 11:48:43 addons-348379 kubelet[1462]: E0923 11:48:43.117220 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
Sep 23 11:48:46 addons-348379 kubelet[1462]: E0923 11:48:46.117696 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
Sep 23 11:48:47 addons-348379 kubelet[1462]: E0923 11:48:47.117833 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
Sep 23 11:48:52 addons-348379 kubelet[1462]: E0923 11:48:52.117495 1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
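Editor's note: every failing pull above uses an image reference with a doubled registry host, `docker.io/docker.io/volcanosh/...`. The registry then resolves `docker.io/volcanosh/vc-webhook-manager` as the repository *path*, which does not exist on Docker Hub, so the HEAD request for the manifest returns 401 Unauthorized and the pods sit in ImagePullBackOff. A minimal sketch of the correction, assuming the intended reference is the single-prefix form (the sha256 digest is elided here for brevity):

```shell
# The reference the kubelet tried to pull repeats the registry host:
bad='docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0'

# Strip the duplicated "docker.io/" prefix to get a resolvable reference.
# ${var#pattern} removes the shortest matching prefix.
good="${bad#docker.io/}"

echo "$good"   # docker.io/volcanosh/vc-scheduler:v1.10.0
```

In practice the fix lands wherever the addon manifest composes the image name (e.g. a templated `registry/image` pair that already includes the host), not at pull time; the snippet only illustrates the malformed versus well-formed reference.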
==> storage-provisioner [b05bd4b18e2804c1706af6011e03e349f643f23d7f968ca74ffb0f2eaf78047d] <==
I0923 11:36:43.550112 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0923 11:36:43.591903 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0923 11:36:43.592066 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0923 11:36:43.604224 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0923 11:36:43.604420 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
I0923 11:36:43.605459 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cdf468f-763c-4a54-98e3-d90ea0e2e8e5", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-348379_e926b365-1f0f-4822-899e-75d077991921 became leader
I0923 11:36:43.707026 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
helpers_test.go:261: (dbg) Run: kubectl --context addons-348379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1 (104.979406ms)
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-fwt6v" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-bfw56" not found
Error from server (NotFound): pods "volcano-admission-7f54bd7598-s85bg" not found
Error from server (NotFound): pods "volcano-admission-init-f2bhm" not found
Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-w658s" not found
Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-2vx88" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1
--- FAIL: TestAddons/serial/Volcano (363.32s)