=== RUN TestSkaffold
skaffold_test.go:59: (dbg) Run: /tmp/skaffold.exe608739235 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run: out/minikube-linux-amd64 start -p skaffold-600759 --memory=3072 --driver=docker --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-600759 --memory=3072 --driver=docker --container-runtime=docker: (23.704360403s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run: /tmp/skaffold.exe608739235 run --minikube-profile skaffold-600759 --kube-context skaffold-600759 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Non-zero exit: /tmp/skaffold.exe608739235 run --minikube-profile skaffold-600759 --kube-context skaffold-600759 --status-check=true --port-forward=false --interactive=false: exit status 1 (6.710427243s)
-- stdout --
Generating tags...
- leeroy-web -> leeroy-web:latest
- leeroy-app -> leeroy-app:latest
- base -> base:latest
Some taggers failed. Rerun with -vdebug for errors.
Checking cache...
- leeroy-web: Not found. Building
- leeroy-app: Not found. Building
- base: Not found. Building
Starting build...
Found [skaffold-600759] context, using local docker daemon.
Building [base]...
Target platforms: [linux/amd64]
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM gcr.io/distroless/base
latest: Pulling from distroless/base
fd4aa3667332: Pulling fs layer
bfb59b82a9b6: Pulling fs layer
017886f7e176: Pulling fs layer
62de241dac5f: Pulling fs layer
2780920e5dbf: Pulling fs layer
7c12895b777b: Pulling fs layer
3214acf345c0: Pulling fs layer
5664b15f108b: Pulling fs layer
045fc1c20da8: Pulling fs layer
4aa0ea1413d3: Pulling fs layer
da7816fa955e: Pulling fs layer
ddf74a63f7d8: Pulling fs layer
e7fa9df358f0: Pulling fs layer
d8a0d911b13e: Pulling fs layer
5664b15f108b: Waiting
045fc1c20da8: Waiting
4aa0ea1413d3: Waiting
da7816fa955e: Waiting
ddf74a63f7d8: Waiting
e7fa9df358f0: Waiting
62de241dac5f: Waiting
2780920e5dbf: Waiting
7c12895b777b: Waiting
3214acf345c0: Waiting
d8a0d911b13e: Waiting
fd4aa3667332: Verifying Checksum
fd4aa3667332: Download complete
bfb59b82a9b6: Verifying Checksum
bfb59b82a9b6: Download complete
017886f7e176: Verifying Checksum
017886f7e176: Download complete
7c12895b777b: Verifying Checksum
7c12895b777b: Download complete
2780920e5dbf: Verifying Checksum
2780920e5dbf: Download complete
fd4aa3667332: Pull complete
bfb59b82a9b6: Pull complete
62de241dac5f: Verifying Checksum
62de241dac5f: Download complete
5664b15f108b: Download complete
3214acf345c0: Download complete
017886f7e176: Pull complete
62de241dac5f: Pull complete
045fc1c20da8: Verifying Checksum
045fc1c20da8: Download complete
2780920e5dbf: Pull complete
7c12895b777b: Pull complete
3214acf345c0: Pull complete
5664b15f108b: Pull complete
045fc1c20da8: Pull complete
4aa0ea1413d3: Verifying Checksum
4aa0ea1413d3: Download complete
da7816fa955e: Verifying Checksum
da7816fa955e: Download complete
4aa0ea1413d3: Pull complete
da7816fa955e: Pull complete
ddf74a63f7d8: Download complete
ddf74a63f7d8: Pull complete
d8a0d911b13e: Verifying Checksum
d8a0d911b13e: Download complete
e7fa9df358f0: Verifying Checksum
e7fa9df358f0: Download complete
e7fa9df358f0: Pull complete
d8a0d911b13e: Pull complete
Digest: sha256:9e9b50d2048db3741f86a48d939b4e4cc775f5889b3496439343301ff54cdba8
Status: Downloaded newer image for gcr.io/distroless/base:latest
---> 314086290b80
Step 2/3 : ENV GOTRACEBACK=single
---> Running in 00945de271c8
---> ea52c5a41e97
Step 3/3 : CMD ["./app"]
---> Running in 0809e99c6571
---> 6d137c5a8316
Successfully built 6d137c5a8316
Successfully tagged base:latest
Build [base] succeeded
Building [leeroy-app]...
Target platforms: [linux/amd64]
Sending build context to Docker daemon 4.096kB
Step 1/9 : ARG BASE
Step 2/9 : FROM golang:1.18 as builder
Building [leeroy-web]...
Target platforms: [linux/amd64]
Build [leeroy-web] was canceled
-- /stdout --
** stderr **
build [leeroy-app] failed: docker build failure: toomanyrequests: You have reached your pull rate limit as 'minikubebot': dckr_jti_W89jo-sMmu2ZeG4U1lTVn5LowXk=. You may increase the limit by upgrading. https://www.docker.com/increase-rate-limit. Please fix the Dockerfile and try again..
** /stderr **
skaffold_test.go:107: error running skaffold: exit status 1
panic.go:636: *** TestSkaffold FAILED at 2025-10-13 14:09:06.439688958 +0000 UTC m=+2128.535391278
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestSkaffold]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect skaffold-600759
helpers_test.go:243: (dbg) docker inspect skaffold-600759:
-- stdout --
[
{
"Id": "0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699",
"Created": "2025-10-13T14:08:40.721574238Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1086854,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-10-13T14:08:40.7573264Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
"ResolvConfPath": "/var/lib/docker/containers/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699/hostname",
"HostsPath": "/var/lib/docker/containers/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699/hosts",
"LogPath": "/var/lib/docker/containers/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699/0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699-json.log",
"Name": "/skaffold-600759",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"skaffold-600759:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "skaffold-600759",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "0bc01f8b66b5e94656af3fbfed074a68c6a484b637b9bf1ab5a9b97d4ed83699",
"LowerDir": "/var/lib/docker/overlay2/a78dec82b5f68e40300f51de43b83b015797c57615e143a68b4a595a8b13e561-init/diff:/var/lib/docker/overlay2/3ca0dbfe0764e1e4674a3bf7155dad506c3286fc280b31af582a3eaa6577aea9/diff",
"MergedDir": "/var/lib/docker/overlay2/a78dec82b5f68e40300f51de43b83b015797c57615e143a68b4a595a8b13e561/merged",
"UpperDir": "/var/lib/docker/overlay2/a78dec82b5f68e40300f51de43b83b015797c57615e143a68b4a595a8b13e561/diff",
"WorkDir": "/var/lib/docker/overlay2/a78dec82b5f68e40300f51de43b83b015797c57615e143a68b4a595a8b13e561/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "skaffold-600759",
"Source": "/var/lib/docker/volumes/skaffold-600759/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "skaffold-600759",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "skaffold-600759",
"name.minikube.sigs.k8s.io": "skaffold-600759",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "fb2a35cc1caedc4596f028cf8245eb3458f96b371a9d9afc221b34aea9ead76a",
"SandboxKey": "/var/run/docker/netns/fb2a35cc1cae",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33348"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33349"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33352"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33350"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33351"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"skaffold-600759": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "46:4d:85:c3:1c:f6",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "2b0dbed557b9cd4b3986c982ffdacfe098a27c674ee8363b52b08cf72487ade3",
"EndpointID": "ef59f240cc4ebb9e5d5299bdac8a2b294fc0f396c778580642ef502774a5a05e",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"skaffold-600759",
"0bc01f8b66b5"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p skaffold-600759 -n skaffold-600759
helpers_test.go:252: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p skaffold-600759 logs -n 25
helpers_test.go:260: TestSkaffold logs:
-- stdout --
==> Audit <==
┌────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
│ start │ -p multinode-542745-m02 --driver=docker --container-runtime=docker │ multinode-542745-m02 │ jenkins │ v1.37.0 │ 13 Oct 25 14:04 UTC │ │
│ start │ -p multinode-542745-m03 --driver=docker --container-runtime=docker │ multinode-542745-m03 │ jenkins │ v1.37.0 │ 13 Oct 25 14:04 UTC │ 13 Oct 25 14:05 UTC │
│ node │ add -p multinode-542745 │ multinode-542745 │ jenkins │ v1.37.0 │ 13 Oct 25 14:05 UTC │ │
│ delete │ -p multinode-542745-m03 │ multinode-542745-m03 │ jenkins │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
│ delete │ -p multinode-542745 │ multinode-542745 │ jenkins │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
│ start │ -p test-preload-319116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.32.0 │ test-preload-319116 │ jenkins │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
│ image │ test-preload-319116 image pull gcr.io/k8s-minikube/busybox │ test-preload-319116 │ jenkins │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
│ stop │ -p test-preload-319116 │ test-preload-319116 │ jenkins │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:05 UTC │
│ start │ -p test-preload-319116 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=docker │ test-preload-319116 │ jenkins │ v1.37.0 │ 13 Oct 25 14:05 UTC │ 13 Oct 25 14:06 UTC │
│ image │ test-preload-319116 image list │ test-preload-319116 │ jenkins │ v1.37.0 │ 13 Oct 25 14:06 UTC │ 13 Oct 25 14:06 UTC │
│ delete │ -p test-preload-319116 │ test-preload-319116 │ jenkins │ v1.37.0 │ 13 Oct 25 14:06 UTC │ 13 Oct 25 14:06 UTC │
│ start │ -p scheduled-stop-902075 --memory=3072 --driver=docker --container-runtime=docker │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:06 UTC │ 13 Oct 25 14:07 UTC │
│ stop │ -p scheduled-stop-902075 --schedule 5m │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ │
│ stop │ -p scheduled-stop-902075 --schedule 5m │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ │
│ stop │ -p scheduled-stop-902075 --schedule 5m │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ │
│ stop │ -p scheduled-stop-902075 --schedule 15s │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ │
│ stop │ -p scheduled-stop-902075 --schedule 15s │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ │
│ stop │ -p scheduled-stop-902075 --schedule 15s │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ │
│ stop │ -p scheduled-stop-902075 --cancel-scheduled │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ 13 Oct 25 14:07 UTC │
│ stop │ -p scheduled-stop-902075 --schedule 15s │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ │
│ stop │ -p scheduled-stop-902075 --schedule 15s │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ │
│ stop │ -p scheduled-stop-902075 --schedule 15s │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:07 UTC │ 13 Oct 25 14:08 UTC │
│ delete │ -p scheduled-stop-902075 │ scheduled-stop-902075 │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
│ start │ -p skaffold-600759 --memory=3072 --driver=docker --container-runtime=docker │ skaffold-600759 │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
│ docker-env │ --shell none -p skaffold-600759 --user=skaffold │ skaffold-600759 │ skaffold │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:09 UTC │
└────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/13 14:08:35
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1013 14:08:35.922555 1086283 out.go:360] Setting OutFile to fd 1 ...
I1013 14:08:35.922638 1086283 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:08:35.922641 1086283 out.go:374] Setting ErrFile to fd 2...
I1013 14:08:35.922644 1086283 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:08:35.922837 1086283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-845765/.minikube/bin
I1013 14:08:35.923337 1086283 out.go:368] Setting JSON to false
I1013 14:08:35.924329 1086283 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":24649,"bootTime":1760339867,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1013 14:08:35.924422 1086283 start.go:141] virtualization: kvm guest
I1013 14:08:35.926680 1086283 out.go:179] * [skaffold-600759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1013 14:08:35.927861 1086283 out.go:179] - MINIKUBE_LOCATION=21724
I1013 14:08:35.927878 1086283 notify.go:220] Checking for updates...
I1013 14:08:35.929727 1086283 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1013 14:08:35.930713 1086283 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21724-845765/kubeconfig
I1013 14:08:35.932189 1086283 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-845765/.minikube
I1013 14:08:35.933105 1086283 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1013 14:08:35.934008 1086283 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1013 14:08:35.934989 1086283 driver.go:421] Setting default libvirt URI to qemu:///system
I1013 14:08:35.958782 1086283 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
I1013 14:08:35.958860 1086283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1013 14:08:36.012897 1086283 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-13 14:08:36.003302012 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1013 14:08:36.012995 1086283 docker.go:318] overlay module found
I1013 14:08:36.014560 1086283 out.go:179] * Using the docker driver based on user configuration
I1013 14:08:36.015561 1086283 start.go:305] selected driver: docker
I1013 14:08:36.015567 1086283 start.go:925] validating driver "docker" against <nil>
I1013 14:08:36.015576 1086283 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1013 14:08:36.016123 1086283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1013 14:08:36.073209 1086283 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-13 14:08:36.063498532 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1013 14:08:36.073378 1086283 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1013 14:08:36.073579 1086283 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
I1013 14:08:36.075199 1086283 out.go:179] * Using Docker driver with root privileges
I1013 14:08:36.076165 1086283 cni.go:84] Creating CNI manager for ""
I1013 14:08:36.076225 1086283 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1013 14:08:36.076233 1086283 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1013 14:08:36.076295 1086283 start.go:349] cluster config:
{Name:skaffold-600759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1013 14:08:36.077574 1086283 out.go:179] * Starting "skaffold-600759" primary control-plane node in "skaffold-600759" cluster
I1013 14:08:36.078614 1086283 cache.go:123] Beginning downloading kic base image for docker with docker
I1013 14:08:36.079788 1086283 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
I1013 14:08:36.080822 1086283 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1013 14:08:36.080865 1086283 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
I1013 14:08:36.080872 1086283 cache.go:58] Caching tarball of preloaded images
I1013 14:08:36.080921 1086283 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
I1013 14:08:36.080974 1086283 preload.go:233] Found /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1013 14:08:36.080984 1086283 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
I1013 14:08:36.081392 1086283 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/config.json ...
I1013 14:08:36.081423 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/config.json: {Name:mk58ae9485859341196626921a5f8128471ddab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:36.100365 1086283 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
I1013 14:08:36.100389 1086283 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
I1013 14:08:36.100404 1086283 cache.go:232] Successfully downloaded all kic artifacts
I1013 14:08:36.100426 1086283 start.go:360] acquireMachinesLock for skaffold-600759: {Name:mke496305f5e5c038a027d04d6cd8b1852188c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1013 14:08:36.100517 1086283 start.go:364] duration metric: took 79.62µs to acquireMachinesLock for "skaffold-600759"
I1013 14:08:36.100536 1086283 start.go:93] Provisioning new machine with config: &{Name:skaffold-600759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I1013 14:08:36.100595 1086283 start.go:125] createHost starting for "" (driver="docker")
I1013 14:08:36.102215 1086283 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1013 14:08:36.102470 1086283 start.go:159] libmachine.API.Create for "skaffold-600759" (driver="docker")
I1013 14:08:36.102492 1086283 client.go:168] LocalClient.Create starting
I1013 14:08:36.102575 1086283 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem
I1013 14:08:36.102602 1086283 main.go:141] libmachine: Decoding PEM data...
I1013 14:08:36.102616 1086283 main.go:141] libmachine: Parsing certificate...
I1013 14:08:36.102676 1086283 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-845765/.minikube/certs/cert.pem
I1013 14:08:36.102689 1086283 main.go:141] libmachine: Decoding PEM data...
I1013 14:08:36.102695 1086283 main.go:141] libmachine: Parsing certificate...
I1013 14:08:36.103005 1086283 cli_runner.go:164] Run: docker network inspect skaffold-600759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1013 14:08:36.118807 1086283 cli_runner.go:211] docker network inspect skaffold-600759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1013 14:08:36.118870 1086283 network_create.go:284] running [docker network inspect skaffold-600759] to gather additional debugging logs...
I1013 14:08:36.118883 1086283 cli_runner.go:164] Run: docker network inspect skaffold-600759
W1013 14:08:36.135784 1086283 cli_runner.go:211] docker network inspect skaffold-600759 returned with exit code 1
I1013 14:08:36.135798 1086283 network_create.go:287] error running [docker network inspect skaffold-600759]: docker network inspect skaffold-600759: exit status 1
stdout:
[]
stderr:
Error response from daemon: network skaffold-600759 not found
I1013 14:08:36.135809 1086283 network_create.go:289] output of [docker network inspect skaffold-600759]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network skaffold-600759 not found
** /stderr **
I1013 14:08:36.135891 1086283 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1013 14:08:36.152435 1086283 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ef0be46c41b2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:64:18:f7:35:96} reservation:<nil>}
I1013 14:08:36.152919 1086283 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-55c6e9b40aad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:d2:9c:d4:2e:2c} reservation:<nil>}
I1013 14:08:36.153466 1086283 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-86d040a1ec93 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:91:83:6e:42:82} reservation:<nil>}
I1013 14:08:36.154210 1086283 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d67a30}
I1013 14:08:36.154229 1086283 network_create.go:124] attempt to create docker network skaffold-600759 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1013 14:08:36.154279 1086283 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-600759 skaffold-600759
I1013 14:08:36.209577 1086283 network_create.go:108] docker network skaffold-600759 192.168.76.0/24 created
I1013 14:08:36.209606 1086283 kic.go:121] calculated static IP "192.168.76.2" for the "skaffold-600759" container
I1013 14:08:36.209677 1086283 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1013 14:08:36.224859 1086283 cli_runner.go:164] Run: docker volume create skaffold-600759 --label name.minikube.sigs.k8s.io=skaffold-600759 --label created_by.minikube.sigs.k8s.io=true
I1013 14:08:36.241679 1086283 oci.go:103] Successfully created a docker volume skaffold-600759
I1013 14:08:36.241761 1086283 cli_runner.go:164] Run: docker run --rm --name skaffold-600759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-600759 --entrypoint /usr/bin/test -v skaffold-600759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
I1013 14:08:36.875454 1086283 oci.go:107] Successfully prepared a docker volume skaffold-600759
I1013 14:08:36.875494 1086283 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1013 14:08:36.875514 1086283 kic.go:194] Starting extracting preloaded images to volume ...
I1013 14:08:36.875585 1086283 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-600759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
I1013 14:08:40.648896 1086283 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-845765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-600759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (3.773246193s)
I1013 14:08:40.648922 1086283 kic.go:203] duration metric: took 3.773404109s to extract preloaded images to volume ...
W1013 14:08:40.649002 1086283 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1013 14:08:40.649033 1086283 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1013 14:08:40.649067 1086283 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1013 14:08:40.706658 1086283 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-600759 --name skaffold-600759 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-600759 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-600759 --network skaffold-600759 --ip 192.168.76.2 --volume skaffold-600759:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
I1013 14:08:40.966190 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Running}}
I1013 14:08:40.983946 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
I1013 14:08:41.001236 1086283 cli_runner.go:164] Run: docker exec skaffold-600759 stat /var/lib/dpkg/alternatives/iptables
I1013 14:08:41.045246 1086283 oci.go:144] the created container "skaffold-600759" has a running status.
I1013 14:08:41.045271 1086283 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa...
I1013 14:08:41.658406 1086283 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1013 14:08:41.682303 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
I1013 14:08:41.700347 1086283 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1013 14:08:41.700364 1086283 kic_runner.go:114] Args: [docker exec --privileged skaffold-600759 chown docker:docker /home/docker/.ssh/authorized_keys]
I1013 14:08:41.745747 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
I1013 14:08:41.763292 1086283 machine.go:93] provisionDockerMachine start ...
I1013 14:08:41.763368 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:41.781007 1086283 main.go:141] libmachine: Using SSH client type: native
I1013 14:08:41.781324 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33348 <nil> <nil>}
I1013 14:08:41.781363 1086283 main.go:141] libmachine: About to run SSH command:
hostname
I1013 14:08:41.928441 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: skaffold-600759
I1013 14:08:41.928466 1086283 ubuntu.go:182] provisioning hostname "skaffold-600759"
I1013 14:08:41.928558 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:41.946420 1086283 main.go:141] libmachine: Using SSH client type: native
I1013 14:08:41.946669 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33348 <nil> <nil>}
I1013 14:08:41.946677 1086283 main.go:141] libmachine: About to run SSH command:
sudo hostname skaffold-600759 && echo "skaffold-600759" | sudo tee /etc/hostname
I1013 14:08:42.104760 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: skaffold-600759
I1013 14:08:42.104828 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:42.122455 1086283 main.go:141] libmachine: Using SSH client type: native
I1013 14:08:42.122659 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33348 <nil> <nil>}
I1013 14:08:42.122670 1086283 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sskaffold-600759' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-600759/g' /etc/hosts;
else
echo '127.0.1.1 skaffold-600759' | sudo tee -a /etc/hosts;
fi
fi
I1013 14:08:42.270647 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1013 14:08:42.270670 1086283 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-845765/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-845765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-845765/.minikube}
I1013 14:08:42.270695 1086283 ubuntu.go:190] setting up certificates
I1013 14:08:42.270706 1086283 provision.go:84] configureAuth start
I1013 14:08:42.270775 1086283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-600759
I1013 14:08:42.288880 1086283 provision.go:143] copyHostCerts
I1013 14:08:42.288954 1086283 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-845765/.minikube/ca.pem, removing ...
I1013 14:08:42.288963 1086283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-845765/.minikube/ca.pem
I1013 14:08:42.289042 1086283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-845765/.minikube/ca.pem (1078 bytes)
I1013 14:08:42.289263 1086283 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-845765/.minikube/cert.pem, removing ...
I1013 14:08:42.289274 1086283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-845765/.minikube/cert.pem
I1013 14:08:42.289334 1086283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-845765/.minikube/cert.pem (1123 bytes)
I1013 14:08:42.289441 1086283 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-845765/.minikube/key.pem, removing ...
I1013 14:08:42.289446 1086283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-845765/.minikube/key.pem
I1013 14:08:42.289484 1086283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-845765/.minikube/key.pem (1675 bytes)
I1013 14:08:42.289561 1086283 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-845765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca-key.pem org=jenkins.skaffold-600759 san=[127.0.0.1 192.168.76.2 localhost minikube skaffold-600759]
I1013 14:08:42.571976 1086283 provision.go:177] copyRemoteCerts
I1013 14:08:42.572037 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1013 14:08:42.572078 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:42.590262 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
I1013 14:08:42.695322 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1013 14:08:42.716052 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I1013 14:08:42.735009 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1013 14:08:42.754078 1086283 provision.go:87] duration metric: took 483.355244ms to configureAuth
I1013 14:08:42.754123 1086283 ubuntu.go:206] setting minikube options for container-runtime
I1013 14:08:42.754293 1086283 config.go:182] Loaded profile config "skaffold-600759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 14:08:42.754338 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:42.773776 1086283 main.go:141] libmachine: Using SSH client type: native
I1013 14:08:42.773986 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33348 <nil> <nil>}
I1013 14:08:42.773992 1086283 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1013 14:08:42.923485 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I1013 14:08:42.923506 1086283 ubuntu.go:71] root file system type: overlay
I1013 14:08:42.923657 1086283 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1013 14:08:42.923744 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:42.942220 1086283 main.go:141] libmachine: Using SSH client type: native
I1013 14:08:42.942435 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33348 <nil> <nil>}
I1013 14:08:42.942497 1086283 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1013 14:08:43.105542 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
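Note: the empty ExecStart= immediately followed by a full ExecStart=... in the unit above is the standard systemd way to replace, rather than append to, a command inherited from another unit file. A minimal sketch of the same pattern as a drop-in override; the override.conf path and dockerd flags here are illustrative only, not what minikube writes:
  # hypothetical override path; minikube replaces the whole unit file instead
  sudo mkdir -p /etc/systemd/system/docker.service.d
  sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
  [Service]
  # clear the ExecStart inherited from the packaged docker.service
  ExecStart=
  # then set the replacement command (flags are illustrative)
  ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
  EOF
  sudo systemctl daemon-reload && sudo systemctl restart docker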
I1013 14:08:43.105637 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:43.124645 1086283 main.go:141] libmachine: Using SSH client type: native
I1013 14:08:43.124916 1086283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33348 <nil> <nil>}
I1013 14:08:43.124936 1086283 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1013 14:08:44.347433 1086283 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-10-02 14:52:52.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-10-13 14:08:43.102309704 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
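Note: the command above relies on diff's exit status. diff -u returns 0 when the two files are identical, so the replace-and-restart block after || only runs when the new unit actually differs from the installed one. A hedged shell sketch of the same idempotent-update idiom for an arbitrary unit file (paths are placeholders):
  new=/tmp/docker.service.new            # candidate file (placeholder path)
  cur=/lib/systemd/system/docker.service
  if ! sudo diff -u "$cur" "$new"; then
    # only reached when the files differ
    sudo mv "$new" "$cur"
    sudo systemctl daemon-reload
    sudo systemctl restart docker
  fi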
I1013 14:08:44.347472 1086283 machine.go:96] duration metric: took 2.584163977s to provisionDockerMachine
I1013 14:08:44.347488 1086283 client.go:171] duration metric: took 8.244990824s to LocalClient.Create
I1013 14:08:44.347515 1086283 start.go:167] duration metric: took 8.245044188s to libmachine.API.Create "skaffold-600759"
I1013 14:08:44.347524 1086283 start.go:293] postStartSetup for "skaffold-600759" (driver="docker")
I1013 14:08:44.347538 1086283 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1013 14:08:44.347610 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1013 14:08:44.347658 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:44.366367 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
I1013 14:08:44.473362 1086283 ssh_runner.go:195] Run: cat /etc/os-release
I1013 14:08:44.477201 1086283 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1013 14:08:44.477219 1086283 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1013 14:08:44.477230 1086283 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-845765/.minikube/addons for local assets ...
I1013 14:08:44.477281 1086283 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-845765/.minikube/files for local assets ...
I1013 14:08:44.477351 1086283 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-845765/.minikube/files/etc/ssl/certs/8494012.pem -> 8494012.pem in /etc/ssl/certs
I1013 14:08:44.477446 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1013 14:08:44.485855 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/files/etc/ssl/certs/8494012.pem --> /etc/ssl/certs/8494012.pem (1708 bytes)
I1013 14:08:44.507673 1086283 start.go:296] duration metric: took 160.130745ms for postStartSetup
I1013 14:08:44.508054 1086283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-600759
I1013 14:08:44.526245 1086283 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/config.json ...
I1013 14:08:44.526526 1086283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1013 14:08:44.526567 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:44.544475 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
I1013 14:08:44.646812 1086283 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1013 14:08:44.651922 1086283 start.go:128] duration metric: took 8.551310055s to createHost
I1013 14:08:44.651943 1086283 start.go:83] releasing machines lock for "skaffold-600759", held for 8.551417925s
I1013 14:08:44.652021 1086283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-600759
I1013 14:08:44.669860 1086283 ssh_runner.go:195] Run: cat /version.json
I1013 14:08:44.669890 1086283 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1013 14:08:44.669904 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:44.669966 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:44.688877 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
I1013 14:08:44.689464 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
I1013 14:08:44.789910 1086283 ssh_runner.go:195] Run: systemctl --version
I1013 14:08:44.848111 1086283 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1013 14:08:44.853349 1086283 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1013 14:08:44.853430 1086283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1013 14:08:44.881490 1086283 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
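Note: because only one CNI configuration should be active, conflicting bridge/podman configs are renamed out of the way rather than deleted; the .mk_disabled suffix keeps the change reversible. A simplified, explicitly quoted restatement of the find command shown above (same pattern and suffix, sketch only):
  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;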
I1013 14:08:44.881514 1086283 start.go:495] detecting cgroup driver to use...
I1013 14:08:44.881553 1086283 detect.go:190] detected "systemd" cgroup driver on host os
I1013 14:08:44.881678 1086283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1013 14:08:44.898246 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1013 14:08:44.909661 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1013 14:08:44.919482 1086283 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1013 14:08:44.919540 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1013 14:08:44.929162 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1013 14:08:44.938594 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1013 14:08:44.948391 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1013 14:08:44.958316 1086283 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1013 14:08:44.967543 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1013 14:08:44.977283 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1013 14:08:44.987409 1086283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1013 14:08:44.997768 1086283 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1013 14:08:45.005876 1086283 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1013 14:08:45.014037 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1013 14:08:45.094383 1086283 ssh_runner.go:195] Run: sudo systemctl restart containerd
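Note: the sed edits above switch containerd's runc options to the systemd cgroup driver so they match the "systemd" driver detected on the host. A rough way to confirm the result by hand, assuming the default config layout:
  # show the effective runc cgroup setting after the edits
  grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expected: SystemdCgroup = true
  sudo systemctl restart containerd && systemctl is-active containerd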
I1013 14:08:45.171556 1086283 start.go:495] detecting cgroup driver to use...
I1013 14:08:45.171601 1086283 detect.go:190] detected "systemd" cgroup driver on host os
I1013 14:08:45.171654 1086283 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1013 14:08:45.185528 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1013 14:08:45.198805 1086283 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1013 14:08:45.216403 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1013 14:08:45.229591 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1013 14:08:45.243295 1086283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1013 14:08:45.258766 1086283 ssh_runner.go:195] Run: which cri-dockerd
I1013 14:08:45.262880 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1013 14:08:45.273871 1086283 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1013 14:08:45.287662 1086283 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1013 14:08:45.370858 1086283 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1013 14:08:45.452263 1086283 docker.go:575] configuring docker to use "systemd" as cgroup driver...
I1013 14:08:45.452373 1086283 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1013 14:08:45.466738 1086283 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1013 14:08:45.479698 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1013 14:08:45.565671 1086283 ssh_runner.go:195] Run: sudo systemctl restart docker
I1013 14:08:46.392608 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1013 14:08:46.406393 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1013 14:08:46.420579 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1013 14:08:46.434768 1086283 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1013 14:08:46.525313 1086283 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1013 14:08:46.615312 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1013 14:08:46.702670 1086283 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1013 14:08:46.733765 1086283 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1013 14:08:46.747902 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1013 14:08:46.830429 1086283 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1013 14:08:46.907369 1086283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1013 14:08:46.921489 1086283 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1013 14:08:46.921553 1086283 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1013 14:08:46.926129 1086283 start.go:563] Will wait 60s for crictl version
I1013 14:08:46.926212 1086283 ssh_runner.go:195] Run: which crictl
I1013 14:08:46.930314 1086283 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1013 14:08:46.958471 1086283 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.5.0
RuntimeApiVersion: v1
I1013 14:08:46.958520 1086283 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1013 14:08:46.986579 1086283 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1013 14:08:47.016332 1086283 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.0 ...
I1013 14:08:47.016402 1086283 cli_runner.go:164] Run: docker network inspect skaffold-600759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1013 14:08:47.034283 1086283 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1013 14:08:47.038966 1086283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
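Note: the /etc/hosts update above is written to be idempotent: any existing host.minikube.internal line is filtered out before the fresh entry is appended, so repeated runs never duplicate it. A generic sketch of the same pattern, with the tab-anchored grep simplified and the IP and hostname as placeholders:
  ip=192.168.76.1; name=host.minikube.internal
  { grep -v "$name" /etc/hosts; echo "$ip $name"; } > "/tmp/hosts.$$"
  sudo cp "/tmp/hosts.$$" /etc/hosts && rm "/tmp/hosts.$$"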
I1013 14:08:47.049954 1086283 kubeadm.go:883] updating cluster {Name:skaffold-600759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1013 14:08:47.050060 1086283 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1013 14:08:47.050130 1086283 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1013 14:08:47.073626 1086283 docker.go:691] Got preloaded images: -- stdout --
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/pause:3.10.1
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1013 14:08:47.073639 1086283 docker.go:621] Images already preloaded, skipping extraction
I1013 14:08:47.073693 1086283 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1013 14:08:47.096076 1086283 docker.go:691] Got preloaded images: -- stdout --
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/pause:3.10.1
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1013 14:08:47.096109 1086283 cache_images.go:85] Images are preloaded, skipping loading
I1013 14:08:47.096138 1086283 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 docker true true} ...
I1013 14:08:47.096242 1086283 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=skaffold-600759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1013 14:08:47.096297 1086283 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1013 14:08:47.150071 1086283 cni.go:84] Creating CNI manager for ""
I1013 14:08:47.150116 1086283 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1013 14:08:47.150137 1086283 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1013 14:08:47.150157 1086283 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-600759 NodeName:skaffold-600759 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1013 14:08:47.150279 1086283 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "skaffold-600759"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
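Note: a generated config like the one above can be sanity-checked without modifying the node by using kubeadm's dry-run mode. This is not a step minikube performs in this log; the binary path and config path below simply mirror the paths already shown:
  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml --dry-run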
I1013 14:08:47.150338 1086283 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1013 14:08:47.159249 1086283 binaries.go:44] Found k8s binaries, skipping transfer
I1013 14:08:47.159315 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1013 14:08:47.167601 1086283 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
I1013 14:08:47.181241 1086283 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1013 14:08:47.194629 1086283 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
I1013 14:08:47.207935 1086283 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1013 14:08:47.212018 1086283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1013 14:08:47.223114 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1013 14:08:47.306434 1086283 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1013 14:08:47.332778 1086283 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759 for IP: 192.168.76.2
I1013 14:08:47.332793 1086283 certs.go:195] generating shared ca certs ...
I1013 14:08:47.332813 1086283 certs.go:227] acquiring lock for ca certs: {Name:mk51a15d90077d4d48a4378abd8bb6ade742ad6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:47.332976 1086283 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-845765/.minikube/ca.key
I1013 14:08:47.333043 1086283 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-845765/.minikube/proxy-client-ca.key
I1013 14:08:47.333053 1086283 certs.go:257] generating profile certs ...
I1013 14:08:47.333139 1086283 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.key
I1013 14:08:47.333148 1086283 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.crt with IP's: []
I1013 14:08:47.700389 1086283 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.crt ...
I1013 14:08:47.700410 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.crt: {Name:mkbb431e08bf484811890407f0abe3e51f985034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:47.700613 1086283 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.key ...
I1013 14:08:47.700620 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/client.key: {Name:mkc935bc5c7aa2d56c8f28ed99a4b2d46fee42e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:47.700707 1086283 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key.58ae2b95
I1013 14:08:47.700719 1086283 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt.58ae2b95 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1013 14:08:48.806518 1086283 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt.58ae2b95 ...
I1013 14:08:48.806539 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt.58ae2b95: {Name:mkf4341de32540907c173f93726610aec506f733 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:48.806713 1086283 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key.58ae2b95 ...
I1013 14:08:48.806721 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key.58ae2b95: {Name:mk3d18ff48695ef27aed2eb30b60ddab347320b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:48.806797 1086283 certs.go:382] copying /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt.58ae2b95 -> /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt
I1013 14:08:48.806865 1086283 certs.go:386] copying /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key.58ae2b95 -> /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key
I1013 14:08:48.806944 1086283 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.key
I1013 14:08:48.806961 1086283 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.crt with IP's: []
I1013 14:08:48.981703 1086283 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.crt ...
I1013 14:08:48.981722 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.crt: {Name:mk50c1e4fe0257783240bb92a889f9d60a6e497a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:48.981905 1086283 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.key ...
I1013 14:08:48.981912 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.key: {Name:mkb5fd25ab20a93b76ba9a577d0ea5b4b05d3112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:48.982110 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/849401.pem (1338 bytes)
W1013 14:08:48.982144 1086283 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-845765/.minikube/certs/849401_empty.pem, impossibly tiny 0 bytes
I1013 14:08:48.982151 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca-key.pem (1675 bytes)
I1013 14:08:48.982170 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/ca.pem (1078 bytes)
I1013 14:08:48.982188 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/cert.pem (1123 bytes)
I1013 14:08:48.982212 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/certs/key.pem (1675 bytes)
I1013 14:08:48.982255 1086283 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-845765/.minikube/files/etc/ssl/certs/8494012.pem (1708 bytes)
I1013 14:08:48.982902 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1013 14:08:49.003306 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1013 14:08:49.022917 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1013 14:08:49.043149 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1013 14:08:49.062193 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1013 14:08:49.081137 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1013 14:08:49.101071 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1013 14:08:49.121688 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/profiles/skaffold-600759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1013 14:08:49.140531 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1013 14:08:49.162565 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/certs/849401.pem --> /usr/share/ca-certificates/849401.pem (1338 bytes)
I1013 14:08:49.182449 1086283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-845765/.minikube/files/etc/ssl/certs/8494012.pem --> /usr/share/ca-certificates/8494012.pem (1708 bytes)
I1013 14:08:49.202000 1086283 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1013 14:08:49.216362 1086283 ssh_runner.go:195] Run: openssl version
I1013 14:08:49.223121 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1013 14:08:49.232356 1086283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1013 14:08:49.236425 1086283 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:34 /usr/share/ca-certificates/minikubeCA.pem
I1013 14:08:49.236477 1086283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1013 14:08:49.271768 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1013 14:08:49.281372 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849401.pem && ln -fs /usr/share/ca-certificates/849401.pem /etc/ssl/certs/849401.pem"
I1013 14:08:49.290019 1086283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849401.pem
I1013 14:08:49.294041 1086283 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 13:39 /usr/share/ca-certificates/849401.pem
I1013 14:08:49.294137 1086283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849401.pem
I1013 14:08:49.328452 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/849401.pem /etc/ssl/certs/51391683.0"
I1013 14:08:49.337814 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8494012.pem && ln -fs /usr/share/ca-certificates/8494012.pem /etc/ssl/certs/8494012.pem"
I1013 14:08:49.346268 1086283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8494012.pem
I1013 14:08:49.350028 1086283 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 13:39 /usr/share/ca-certificates/8494012.pem
I1013 14:08:49.350067 1086283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8494012.pem
I1013 14:08:49.384347 1086283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8494012.pem /etc/ssl/certs/3ec20f2e.0"
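Note: the sequence above reproduces, by hand, roughly what c_rehash / update-ca-certificates does: each CA is copied under /usr/share/ca-certificates and then symlinked in /etc/ssl/certs under its OpenSSL subject-hash name so the library can locate it. A compact sketch of that step for a single certificate (file name is a placeholder):
  cert=/usr/share/ca-certificates/minikubeCA.pem
  hash=$(openssl x509 -hash -noout -in "$cert")
  sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"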
I1013 14:08:49.392736 1086283 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1013 14:08:49.396371 1086283 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1013 14:08:49.396414 1086283 kubeadm.go:400] StartCluster: {Name:skaffold-600759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:skaffold-600759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1013 14:08:49.396510 1086283 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1013 14:08:49.415872 1086283 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1013 14:08:49.423600 1086283 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1013 14:08:49.431340 1086283 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1013 14:08:49.431395 1086283 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1013 14:08:49.438725 1086283 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1013 14:08:49.438735 1086283 kubeadm.go:157] found existing configuration files:
I1013 14:08:49.438778 1086283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1013 14:08:49.446074 1086283 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1013 14:08:49.446126 1086283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1013 14:08:49.453048 1086283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1013 14:08:49.460293 1086283 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1013 14:08:49.460329 1086283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1013 14:08:49.467361 1086283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1013 14:08:49.474685 1086283 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1013 14:08:49.474724 1086283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1013 14:08:49.481777 1086283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1013 14:08:49.489031 1086283 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1013 14:08:49.489074 1086283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1013 14:08:49.496033 1086283 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1013 14:08:49.562425 1086283 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1013 14:08:49.620325 1086283 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1013 14:08:58.612716 1086283 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1013 14:08:58.612759 1086283 kubeadm.go:318] [preflight] Running pre-flight checks
I1013 14:08:58.612835 1086283 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1013 14:08:58.612876 1086283 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1013 14:08:58.612902 1086283 kubeadm.go:318] OS: Linux
I1013 14:08:58.612936 1086283 kubeadm.go:318] CGROUPS_CPU: enabled
I1013 14:08:58.612971 1086283 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1013 14:08:58.613039 1086283 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1013 14:08:58.613098 1086283 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1013 14:08:58.613151 1086283 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1013 14:08:58.613194 1086283 kubeadm.go:318] CGROUPS_PIDS: enabled
I1013 14:08:58.613232 1086283 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1013 14:08:58.613273 1086283 kubeadm.go:318] CGROUPS_IO: enabled
I1013 14:08:58.613338 1086283 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1013 14:08:58.613457 1086283 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1013 14:08:58.613570 1086283 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1013 14:08:58.613629 1086283 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1013 14:08:58.615653 1086283 out.go:252] - Generating certificates and keys ...
I1013 14:08:58.615719 1086283 kubeadm.go:318] [certs] Using existing ca certificate authority
I1013 14:08:58.615768 1086283 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1013 14:08:58.615818 1086283 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1013 14:08:58.615860 1086283 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1013 14:08:58.615905 1086283 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1013 14:08:58.615943 1086283 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1013 14:08:58.616004 1086283 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1013 14:08:58.616157 1086283 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost skaffold-600759] and IPs [192.168.76.2 127.0.0.1 ::1]
I1013 14:08:58.616221 1086283 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1013 14:08:58.616335 1086283 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost skaffold-600759] and IPs [192.168.76.2 127.0.0.1 ::1]
I1013 14:08:58.616395 1086283 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1013 14:08:58.616463 1086283 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1013 14:08:58.616500 1086283 kubeadm.go:318] [certs] Generating "sa" key and public key
I1013 14:08:58.616544 1086283 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1013 14:08:58.616610 1086283 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1013 14:08:58.616716 1086283 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1013 14:08:58.616768 1086283 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1013 14:08:58.616820 1086283 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1013 14:08:58.616861 1086283 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1013 14:08:58.616940 1086283 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1013 14:08:58.617009 1086283 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1013 14:08:58.617983 1086283 out.go:252] - Booting up control plane ...
I1013 14:08:58.618055 1086283 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1013 14:08:58.618158 1086283 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1013 14:08:58.618225 1086283 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1013 14:08:58.618310 1086283 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1013 14:08:58.618384 1086283 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1013 14:08:58.618477 1086283 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1013 14:08:58.618542 1086283 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1013 14:08:58.618571 1086283 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1013 14:08:58.618701 1086283 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1013 14:08:58.618816 1086283 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1013 14:08:58.618894 1086283 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.842925ms
I1013 14:08:58.619012 1086283 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1013 14:08:58.619073 1086283 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
I1013 14:08:58.619200 1086283 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1013 14:08:58.619268 1086283 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1013 14:08:58.619341 1086283 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.385876401s
I1013 14:08:58.619407 1086283 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.005059747s
I1013 14:08:58.619479 1086283 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501184692s
I1013 14:08:58.619570 1086283 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1013 14:08:58.619709 1086283 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1013 14:08:58.619760 1086283 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
I1013 14:08:58.619943 1086283 kubeadm.go:318] [mark-control-plane] Marking the node skaffold-600759 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1013 14:08:58.620015 1086283 kubeadm.go:318] [bootstrap-token] Using token: piyn5s.2kp6jsrawp1uyq9s
I1013 14:08:58.621146 1086283 out.go:252] - Configuring RBAC rules ...
I1013 14:08:58.621250 1086283 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1013 14:08:58.621319 1086283 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1013 14:08:58.621452 1086283 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1013 14:08:58.621598 1086283 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1013 14:08:58.621754 1086283 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1013 14:08:58.621831 1086283 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1013 14:08:58.621932 1086283 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1013 14:08:58.621981 1086283 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
I1013 14:08:58.622025 1086283 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
I1013 14:08:58.622027 1086283 kubeadm.go:318]
I1013 14:08:58.622076 1086283 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
I1013 14:08:58.622080 1086283 kubeadm.go:318]
I1013 14:08:58.622181 1086283 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
I1013 14:08:58.622185 1086283 kubeadm.go:318]
I1013 14:08:58.622209 1086283 kubeadm.go:318] mkdir -p $HOME/.kube
I1013 14:08:58.622256 1086283 kubeadm.go:318] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1013 14:08:58.622300 1086283 kubeadm.go:318] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1013 14:08:58.622302 1086283 kubeadm.go:318]
I1013 14:08:58.622346 1086283 kubeadm.go:318] Alternatively, if you are the root user, you can run:
I1013 14:08:58.622356 1086283 kubeadm.go:318]
I1013 14:08:58.622394 1086283 kubeadm.go:318] export KUBECONFIG=/etc/kubernetes/admin.conf
I1013 14:08:58.622397 1086283 kubeadm.go:318]
I1013 14:08:58.622448 1086283 kubeadm.go:318] You should now deploy a pod network to the cluster.
I1013 14:08:58.622509 1086283 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1013 14:08:58.622561 1086283 kubeadm.go:318] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1013 14:08:58.622564 1086283 kubeadm.go:318]
I1013 14:08:58.622630 1086283 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
I1013 14:08:58.622695 1086283 kubeadm.go:318] and service account keys on each node and then running the following as root:
I1013 14:08:58.622698 1086283 kubeadm.go:318]
I1013 14:08:58.622768 1086283 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token piyn5s.2kp6jsrawp1uyq9s \
I1013 14:08:58.622860 1086283 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:16d9a7410241b2acfdff9ea6415bd20df136db6f360e1d41e81cf20406588c23 \
I1013 14:08:58.622876 1086283 kubeadm.go:318] --control-plane
I1013 14:08:58.622878 1086283 kubeadm.go:318]
I1013 14:08:58.622947 1086283 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
I1013 14:08:58.622949 1086283 kubeadm.go:318]
I1013 14:08:58.623058 1086283 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token piyn5s.2kp6jsrawp1uyq9s \
I1013 14:08:58.623232 1086283 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:16d9a7410241b2acfdff9ea6415bd20df136db6f360e1d41e81cf20406588c23
I1013 14:08:58.623240 1086283 cni.go:84] Creating CNI manager for ""
I1013 14:08:58.623260 1086283 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1013 14:08:58.624426 1086283 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1013 14:08:58.625368 1086283 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1013 14:08:58.634115 1086283 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1013 14:08:58.647804 1086283 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1013 14:08:58.647859 1086283 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1013 14:08:58.647887 1086283 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes skaffold-600759 minikube.k8s.io/updated_at=2025_10_13T14_08_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=skaffold-600759 minikube.k8s.io/primary=true
I1013 14:08:58.658142 1086283 ops.go:34] apiserver oom_adj: -16
I1013 14:08:58.723450 1086283 kubeadm.go:1113] duration metric: took 75.631359ms to wait for elevateKubeSystemPrivileges
I1013 14:08:58.739983 1086283 kubeadm.go:402] duration metric: took 9.343562645s to StartCluster
I1013 14:08:58.740016 1086283 settings.go:142] acquiring lock: {Name:mk24de2af2bc4af7e814eea58e5a79fdffd1539a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:58.740123 1086283 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21724-845765/kubeconfig
I1013 14:08:58.740820 1086283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-845765/kubeconfig: {Name:mk457195fd43ec40c74fabe4f2e22723d064915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1013 14:08:58.741018 1086283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1013 14:08:58.741037 1086283 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I1013 14:08:58.741100 1086283 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1013 14:08:58.741193 1086283 addons.go:69] Setting storage-provisioner=true in profile "skaffold-600759"
I1013 14:08:58.741211 1086283 addons.go:238] Setting addon storage-provisioner=true in "skaffold-600759"
I1013 14:08:58.741219 1086283 addons.go:69] Setting default-storageclass=true in profile "skaffold-600759"
I1013 14:08:58.741233 1086283 config.go:182] Loaded profile config "skaffold-600759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1013 14:08:58.741243 1086283 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-600759"
I1013 14:08:58.741246 1086283 host.go:66] Checking if "skaffold-600759" exists ...
I1013 14:08:58.741620 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
I1013 14:08:58.741744 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
I1013 14:08:58.742534 1086283 out.go:179] * Verifying Kubernetes components...
I1013 14:08:58.743703 1086283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1013 14:08:58.764989 1086283 addons.go:238] Setting addon default-storageclass=true in "skaffold-600759"
I1013 14:08:58.765026 1086283 host.go:66] Checking if "skaffold-600759" exists ...
I1013 14:08:58.765559 1086283 cli_runner.go:164] Run: docker container inspect skaffold-600759 --format={{.State.Status}}
I1013 14:08:58.766157 1086283 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1013 14:08:58.767975 1086283 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1013 14:08:58.767987 1086283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1013 14:08:58.768047 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:58.791205 1086283 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I1013 14:08:58.791254 1086283 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1013 14:08:58.791347 1086283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-600759
I1013 14:08:58.800202 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
I1013 14:08:58.814291 1086283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21724-845765/.minikube/machines/skaffold-600759/id_rsa Username:docker}
I1013 14:08:58.834656 1086283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1013 14:08:58.887414 1086283 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1013 14:08:58.919640 1086283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1013 14:08:58.929049 1086283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1013 14:08:59.015053 1086283 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
I1013 14:08:59.016008 1086283 api_server.go:52] waiting for apiserver process to appear ...
I1013 14:08:59.016065 1086283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1013 14:08:59.196149 1086283 api_server.go:72] duration metric: took 455.077315ms to wait for apiserver process to appear ...
I1013 14:08:59.196166 1086283 api_server.go:88] waiting for apiserver healthz status ...
I1013 14:08:59.196185 1086283 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1013 14:08:59.201750 1086283 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1013 14:08:59.202720 1086283 api_server.go:141] control plane version: v1.34.1
I1013 14:08:59.202742 1086283 api_server.go:131] duration metric: took 6.570255ms to wait for apiserver health ...
I1013 14:08:59.202751 1086283 system_pods.go:43] waiting for kube-system pods to appear ...
I1013 14:08:59.203077 1086283 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1013 14:08:59.204222 1086283 addons.go:514] duration metric: took 463.127133ms for enable addons: enabled=[storage-provisioner default-storageclass]
I1013 14:08:59.205323 1086283 system_pods.go:59] 5 kube-system pods found
I1013 14:08:59.205347 1086283 system_pods.go:61] "etcd-skaffold-600759" [40779ea3-464f-4822-8adc-56ddb6a01424] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1013 14:08:59.205355 1086283 system_pods.go:61] "kube-apiserver-skaffold-600759" [1eb64043-99c7-4905-bb90-5f2737ddd669] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1013 14:08:59.205364 1086283 system_pods.go:61] "kube-controller-manager-skaffold-600759" [156fb531-9d0a-4cdc-a2de-0896c8417b0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1013 14:08:59.205375 1086283 system_pods.go:61] "kube-scheduler-skaffold-600759" [c2c56e97-d185-426d-9140-6fdbee90fb4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1013 14:08:59.205379 1086283 system_pods.go:61] "storage-provisioner" [a453fdce-cdf1-4d1d-a723-e4452a80c902] Pending
I1013 14:08:59.205385 1086283 system_pods.go:74] duration metric: took 2.629482ms to wait for pod list to return data ...
I1013 14:08:59.205396 1086283 kubeadm.go:586] duration metric: took 464.330959ms to wait for: map[apiserver:true system_pods:true]
I1013 14:08:59.205407 1086283 node_conditions.go:102] verifying NodePressure condition ...
I1013 14:08:59.207398 1086283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1013 14:08:59.207412 1086283 node_conditions.go:123] node cpu capacity is 8
I1013 14:08:59.207423 1086283 node_conditions.go:105] duration metric: took 2.012663ms to run NodePressure ...
I1013 14:08:59.207441 1086283 start.go:241] waiting for startup goroutines ...
I1013 14:08:59.519194 1086283 kapi.go:214] "coredns" deployment in "kube-system" namespace and "skaffold-600759" context rescaled to 1 replicas
I1013 14:08:59.519222 1086283 start.go:246] waiting for cluster config update ...
I1013 14:08:59.519231 1086283 start.go:255] writing updated cluster config ...
I1013 14:08:59.519521 1086283 ssh_runner.go:195] Run: rm -f paused
I1013 14:08:59.568909 1086283 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
I1013 14:08:59.570714 1086283 out.go:179] * Done! kubectl is now configured to use "skaffold-600759" cluster and "default" namespace by default
==> Docker <==
Oct 13 14:08:46 skaffold-600759 dockerd[1051]: time="2025-10-13T14:08:46.390358603Z" level=info msg="API listen on /var/run/docker.sock"
Oct 13 14:08:46 skaffold-600759 dockerd[1051]: time="2025-10-13T14:08:46.390364508Z" level=info msg="API listen on /run/docker.sock"
Oct 13 14:08:46 skaffold-600759 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 13 14:08:46 skaffold-600759 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Start docker client with request timeout 0s"
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Hairpin mode is set to hairpin-veth"
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Loaded network plugin cni"
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Docker cri networking managed by network plugin cni"
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Setting cgroupDriver systemd"
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Oct 13 14:08:46 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:46Z" level=info msg="Start cri-dockerd grpc backend"
Oct 13 14:08:46 skaffold-600759 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
Oct 13 14:08:54 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06711b21443f3e7f75ec83e4b950369e5e3278dc1b769cff42cadba987eb3cd9/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Oct 13 14:08:54 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0ada0908093382492f69da54fba2eb9e47350751af1cb906d6ffc221fc12ed83/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Oct 13 14:08:54 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97e4b6282c162a368eda1fc8a71586c70983e35e7348a2e503073974177261a9/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
Oct 13 14:08:54 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/21cd502124f3eb46d78bc73e590e4b8459c33bd93154a0bdff8367eea17e1b16/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Oct 13 14:09:04 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:09:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f6b12fae8865d7d86c65ce21a6249c1828d7d0319f0b9365aab98a8f773891cc/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Oct 13 14:09:04 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:09:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/21fdebb22a254ab52aafd31d33b117a63b59011b6b494e5da2e04ea6b324e135/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
Oct 13 14:09:04 skaffold-600759 cri-dockerd[1360]: time="2025-10-13T14:09:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bedd2ec989836bef730a523fe5327090245635aad785ffac25d05c3a1b0028d5/resolv.conf as [nameserver 192.168.76.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Oct 13 14:09:05 skaffold-600759 dockerd[1051]: time="2025-10-13T14:09:05.220281860Z" level=info msg="Layer sha256:93bb432fc635ff65b22d8fd06065779d21d54079752b73e679b66e22eb809875 cleaned up"
Oct 13 14:09:05 skaffold-600759 dockerd[1051]: time="2025-10-13T14:09:05.254227070Z" level=info msg="Layer sha256:93bb432fc635ff65b22d8fd06065779d21d54079752b73e679b66e22eb809875 cleaned up"
Oct 13 14:09:06 skaffold-600759 dockerd[1051]: time="2025-10-13T14:09:06.411063046Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your pull rate limit as 'minikubebot': dckr_jti_W89jo-sMmu2ZeG4U1lTVn5LowXk=. You may increase the limit by upgrading. https://www.docker.com/increase-rate-limit"
==> container status <==
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID          POD                                        NAMESPACE
bcfcab7ad779b   52546a367cc9e   3 seconds ago    Running   coredns                   0         bedd2ec989836   coredns-66bc5c9577-gzcdr                   kube-system
68a7639d0dd49   6e38f40d628db   3 seconds ago    Running   storage-provisioner       0         21fdebb22a254   storage-provisioner                        kube-system
aefcb6ad4091a   fc25172553d79   3 seconds ago    Running   kube-proxy                0         f6b12fae8865d   kube-proxy-g29j8                           kube-system
d1735c532b549   c3994bc696102   13 seconds ago   Running   kube-apiserver            0         97e4b6282c162   kube-apiserver-skaffold-600759             kube-system
80ad111ab1f84   7dd6aaa1717ab   13 seconds ago   Running   kube-scheduler            0         21cd502124f3e   kube-scheduler-skaffold-600759             kube-system
9c2bb3b038bb1   5f1f5298c888d   13 seconds ago   Running   etcd                      0         0ada090809338   etcd-skaffold-600759                       kube-system
0e5e0ebf97f9c   c80c8dbafe7dd   13 seconds ago   Running   kube-controller-manager   0         06711b21443f3   kube-controller-manager-skaffold-600759    kube-system
==> coredns [bcfcab7ad779] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
==> describe nodes <==
Name: skaffold-600759
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=skaffold-600759
kubernetes.io/os=linux
minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
minikube.k8s.io/name=skaffold-600759
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_13T14_08_58_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 13 Oct 2025 14:08:55 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: skaffold-600759
AcquireTime: <unset>
RenewTime: Mon, 13 Oct 2025 14:08:57 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
MemoryPressure   False   Mon, 13 Oct 2025 14:09:01 +0000   Mon, 13 Oct 2025 14:08:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Mon, 13 Oct 2025 14:09:01 +0000   Mon, 13 Oct 2025 14:08:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Mon, 13 Oct 2025 14:09:01 +0000   Mon, 13 Oct 2025 14:08:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True    Mon, 13 Oct 2025 14:09:01 +0000   Mon, 13 Oct 2025 14:09:01 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: skaffold-600759
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863448Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863448Ki
pods: 110
System Info:
Machine ID: ae5563d544f246dfb2debce30ea7e52f
System UUID: 2721505f-91c0-410c-83f1-ad2dac5d9d90
Boot ID: 11a94ccc-a4cf-476c-b883-d77264fdee8f
Kernel Version: 6.8.0-1041-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.5.0
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace    Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------    ----                                       ------------  ----------  ---------------  -------------  ---
kube-system  coredns-66bc5c9577-gzcdr                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4s
kube-system  etcd-skaffold-600759                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10s
kube-system  kube-apiserver-skaffold-600759             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10s
kube-system  kube-controller-manager-skaffold-600759    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10s
kube-system  kube-proxy-g29j8                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
kube-system  kube-scheduler-skaffold-600759             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10s
kube-system  storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                750m (9%)   0 (0%)
memory             170Mi (0%)  170Mi (0%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-1Gi      0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type    Reason                   Age                From             Message
----    ------                   ----               ----             -------
Normal  Starting                 2s                 kube-proxy
Normal  Starting                 14s                kubelet          Starting kubelet.
Normal  NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node skaffold-600759 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node skaffold-600759 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     14s (x7 over 14s)  kubelet          Node skaffold-600759 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  14s                kubelet          Updated Node Allocatable limit across pods
Normal  Starting                 10s                kubelet          Starting kubelet.
Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  10s                kubelet          Node skaffold-600759 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    10s                kubelet          Node skaffold-600759 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     10s                kubelet          Node skaffold-600759 status is now: NodeHasSufficientPID
Normal  NodeReady                6s                 kubelet          Node skaffold-600759 status is now: NodeReady
Normal  RegisteredNode           5s                 node-controller  Node skaffold-600759 event: Registered Node skaffold-600759 in Controller
==> dmesg <==
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 39 35 f3 64 4d 08 06
[ +0.000647] IPv4: martian source 10.244.0.31 from 10.244.0.7, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 55 e3 76 26 92 08 06
[ +9.892718] IPv4: martian source 10.244.0.32 from 10.244.0.25, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 2a 99 64 46 3d 08 06
[Oct13 13:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a b5 fe ff b2 ae 08 06
[Oct13 13:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000010] ll header: 00000000: ff ff ff ff ff ff 26 db 2e 1c c1 c4 08 06
[Oct13 13:42] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff da 65 0d 99 f7 7a 08 06
[ +32.969545] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
[ +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 4a b8 fd bd a0 08 06
[Oct13 13:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff ca 58 c0 08 14 66 08 06
[Oct13 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 77 d6 2c 21 9b 08 06
[Oct13 14:05] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 3d ac a7 6f 32 08 06
[Oct13 14:06] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a c4 7c 38 f9 7e 08 06
[Oct13 14:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 46 2f 41 9a a9 08 06
[Oct13 14:09] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 17 07 d9 db f9 08 06
==> etcd [9c2bb3b038bb] <==
{"level":"warn","ts":"2025-10-13T14:08:55.115742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36654","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.123232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36700","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.129181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36704","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.135059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.141139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36744","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.147718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36768","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.154739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36788","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.160823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36812","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.167743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.174160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36868","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.180875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.188210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.195392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.217128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.223262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36966","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.230453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36970","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-13T14:08:55.278964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-10-13T14:09:03.779527Z","caller":"traceutil/trace.go:172","msg":"trace[2119658154] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"131.256101ms","start":"2025-10-13T14:09:03.648252Z","end":"2025-10-13T14:09:03.779508Z","steps":["trace[2119658154] 'process raft request' (duration: 131.121046ms)"],"step_count":1}
{"level":"info","ts":"2025-10-13T14:09:03.779533Z","caller":"traceutil/trace.go:172","msg":"trace[1645608779] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"131.433097ms","start":"2025-10-13T14:09:03.648064Z","end":"2025-10-13T14:09:03.779497Z","steps":["trace[1645608779] 'process raft request' (duration: 106.220281ms)","trace[1645608779] 'compare' (duration: 24.940003ms)"],"step_count":2}
{"level":"info","ts":"2025-10-13T14:09:03.942198Z","caller":"traceutil/trace.go:172","msg":"trace[2127528773] linearizableReadLoop","detail":"{readStateIndex:382; appliedIndex:382; }","duration":"152.84238ms","start":"2025-10-13T14:09:03.789321Z","end":"2025-10-13T14:09:03.942164Z","steps":["trace[2127528773] 'read index received' (duration: 152.832535ms)","trace[2127528773] 'applied index is now lower than readState.Index' (duration: 8.786µs)"],"step_count":2}
{"level":"warn","ts":"2025-10-13T14:09:03.945877Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.52648ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
{"level":"info","ts":"2025-10-13T14:09:03.945937Z","caller":"traceutil/trace.go:172","msg":"trace[1607692199] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"159.2917ms","start":"2025-10-13T14:09:03.786629Z","end":"2025-10-13T14:09:03.945921Z","steps":["trace[1607692199] 'process raft request' (duration: 155.679363ms)"],"step_count":1}
{"level":"info","ts":"2025-10-13T14:09:03.945964Z","caller":"traceutil/trace.go:172","msg":"trace[1249664331] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:371; }","duration":"156.640661ms","start":"2025-10-13T14:09:03.789312Z","end":"2025-10-13T14:09:03.945953Z","steps":["trace[1249664331] 'agreement among raft nodes before linearized reading' (duration: 152.93734ms)"],"step_count":1}
{"level":"info","ts":"2025-10-13T14:09:03.946009Z","caller":"traceutil/trace.go:172","msg":"trace[76491729] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"158.444253ms","start":"2025-10-13T14:09:03.787549Z","end":"2025-10-13T14:09:03.945994Z","steps":["trace[76491729] 'process raft request' (duration: 158.394947ms)"],"step_count":1}
{"level":"info","ts":"2025-10-13T14:09:03.946053Z","caller":"traceutil/trace.go:172","msg":"trace[1405593765] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"159.312938ms","start":"2025-10-13T14:09:03.786717Z","end":"2025-10-13T14:09:03.946030Z","steps":["trace[1405593765] 'process raft request' (duration: 159.158754ms)"],"step_count":1}
==> kernel <==
14:09:07 up 6:51, 0 user, load average: 1.03, 1.25, 8.79
Linux skaffold-600759 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kube-apiserver [d1735c532b54] <==
I1013 14:08:55.777504 1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
I1013 14:08:55.777539 1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
I1013 14:08:55.777649 1 shared_informer.go:356] "Caches are synced" controller="configmaps"
I1013 14:08:55.782403 1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
I1013 14:08:55.782821 1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
I1013 14:08:55.789279 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1013 14:08:55.789419 1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
I1013 14:08:55.944560 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1013 14:08:56.658426 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I1013 14:08:56.662889 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I1013 14:08:56.662908 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1013 14:08:57.110369 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1013 14:08:57.143543 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1013 14:08:57.270898 1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W1013 14:08:57.276909 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I1013 14:08:57.277807 1 controller.go:667] quota admission added evaluator for: endpoints
I1013 14:08:57.281456 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1013 14:08:57.694382 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1013 14:08:58.012779 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1013 14:08:58.020664 1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I1013 14:08:58.027156 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1013 14:09:02.947710 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1013 14:09:02.951309 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1013 14:09:03.394127 1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
I1013 14:09:03.786172 1 controller.go:667] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager [0e5e0ebf97f9] <==
I1013 14:09:02.692336 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1013 14:09:02.692315 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1013 14:09:02.692436 1 shared_informer.go:356] "Caches are synced" controller="job"
I1013 14:09:02.692372 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I1013 14:09:02.692785 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1013 14:09:02.694229 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1013 14:09:02.694319 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1013 14:09:02.696690 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1013 14:09:02.696720 1 shared_informer.go:356] "Caches are synced" controller="node"
I1013 14:09:02.696819 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I1013 14:09:02.696872 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1013 14:09:02.696883 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1013 14:09:02.696890 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1013 14:09:02.697012 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1013 14:09:02.697082 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1013 14:09:02.698963 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
I1013 14:09:02.699000 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
I1013 14:09:02.700156 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1013 14:09:02.700187 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I1013 14:09:02.703478 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1013 14:09:02.706823 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="skaffold-600759" podCIDRs=["10.244.0.0/24"]
I1013 14:09:02.709774 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1013 14:09:02.717414 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1013 14:09:02.720675 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1013 14:09:02.723979 1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
==> kube-proxy [aefcb6ad4091] <==
I1013 14:09:04.272885 1 server_linux.go:53] "Using iptables proxy"
I1013 14:09:04.332484 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1013 14:09:04.432965 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1013 14:09:04.433006 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
E1013 14:09:04.433124 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1013 14:09:04.455616 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1013 14:09:04.455681 1 server_linux.go:132] "Using iptables Proxier"
I1013 14:09:04.463227 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1013 14:09:04.463636 1 server.go:527] "Version info" version="v1.34.1"
I1013 14:09:04.463664 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1013 14:09:04.465292 1 config.go:200] "Starting service config controller"
I1013 14:09:04.465310 1 config.go:106] "Starting endpoint slice config controller"
I1013 14:09:04.465328 1 config.go:403] "Starting serviceCIDR config controller"
I1013 14:09:04.465341 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1013 14:09:04.465350 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1013 14:09:04.465338 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1013 14:09:04.465507 1 config.go:309] "Starting node config controller"
I1013 14:09:04.465519 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1013 14:09:04.565586 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1013 14:09:04.565582 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1013 14:09:04.565614 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1013 14:09:04.565642 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [80ad111ab1f8] <==
E1013 14:08:55.704486 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1013 14:08:55.704511 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1013 14:08:55.704600 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1013 14:08:55.704611 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1013 14:08:55.704721 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1013 14:08:55.704720 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1013 14:08:55.704624 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1013 14:08:55.704856 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1013 14:08:55.704858 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1013 14:08:55.704910 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1013 14:08:56.509217 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1013 14:08:56.519253 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1013 14:08:56.577496 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1013 14:08:56.599679 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1013 14:08:56.670061 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1013 14:08:56.681143 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1013 14:08:56.724687 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1013 14:08:56.735773 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1013 14:08:56.754862 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1013 14:08:56.755815 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1013 14:08:56.813566 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1013 14:08:56.853714 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1013 14:08:56.912034 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1013 14:08:56.955179 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
I1013 14:08:58.801880 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.897282 2252 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-skaffold-600759"
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.897802 2252 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-skaffold-600759"
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: E1013 14:08:58.907525 2252 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-skaffold-600759\" already exists" pod="kube-system/kube-apiserver-skaffold-600759"
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: E1013 14:08:58.909154 2252 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-skaffold-600759\" already exists" pod="kube-system/kube-scheduler-skaffold-600759"
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: E1013 14:08:58.909401 2252 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-skaffold-600759\" already exists" pod="kube-system/kube-controller-manager-skaffold-600759"
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: E1013 14:08:58.909427 2252 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-skaffold-600759\" already exists" pod="kube-system/etcd-skaffold-600759"
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.920214 2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-skaffold-600759" podStartSLOduration=1.920196177 podStartE2EDuration="1.920196177s" podCreationTimestamp="2025-10-13 14:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:08:58.920041192 +0000 UTC m=+1.146198690" watchObservedRunningTime="2025-10-13 14:08:58.920196177 +0000 UTC m=+1.146353659"
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.936730 2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-skaffold-600759" podStartSLOduration=1.9367093199999998 podStartE2EDuration="1.93670932s" podCreationTimestamp="2025-10-13 14:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:08:58.92799753 +0000 UTC m=+1.154155027" watchObservedRunningTime="2025-10-13 14:08:58.93670932 +0000 UTC m=+1.162866821"
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.948620 2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-skaffold-600759" podStartSLOduration=1.948601117 podStartE2EDuration="1.948601117s" podCreationTimestamp="2025-10-13 14:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:08:58.937454107 +0000 UTC m=+1.163611607" watchObservedRunningTime="2025-10-13 14:08:58.948601117 +0000 UTC m=+1.174758607"
Oct 13 14:08:58 skaffold-600759 kubelet[2252]: I1013 14:08:58.948718 2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-skaffold-600759" podStartSLOduration=1.948714057 podStartE2EDuration="1.948714057s" podCreationTimestamp="2025-10-13 14:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:08:58.948389242 +0000 UTC m=+1.174546741" watchObservedRunningTime="2025-10-13 14:08:58.948714057 +0000 UTC m=+1.174871561"
Oct 13 14:09:01 skaffold-600759 kubelet[2252]: I1013 14:09:01.974015 2252 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Oct 13 14:09:02 skaffold-600759 kubelet[2252]: I1013 14:09:02.774994 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwwqf\" (UniqueName: \"kubernetes.io/projected/a453fdce-cdf1-4d1d-a723-e4452a80c902-kube-api-access-hwwqf\") pod \"storage-provisioner\" (UID: \"a453fdce-cdf1-4d1d-a723-e4452a80c902\") " pod="kube-system/storage-provisioner"
Oct 13 14:09:02 skaffold-600759 kubelet[2252]: I1013 14:09:02.775038 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a453fdce-cdf1-4d1d-a723-e4452a80c902-tmp\") pod \"storage-provisioner\" (UID: \"a453fdce-cdf1-4d1d-a723-e4452a80c902\") " pod="kube-system/storage-provisioner"
Oct 13 14:09:02 skaffold-600759 kubelet[2252]: E1013 14:09:02.881616 2252 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 13 14:09:02 skaffold-600759 kubelet[2252]: E1013 14:09:02.881652 2252 projected.go:196] Error preparing data for projected volume kube-api-access-hwwqf for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
Oct 13 14:09:02 skaffold-600759 kubelet[2252]: E1013 14:09:02.881751 2252 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a453fdce-cdf1-4d1d-a723-e4452a80c902-kube-api-access-hwwqf podName:a453fdce-cdf1-4d1d-a723-e4452a80c902 nodeName:}" failed. No retries permitted until 2025-10-13 14:09:03.381720751 +0000 UTC m=+5.607878243 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hwwqf" (UniqueName: "kubernetes.io/projected/a453fdce-cdf1-4d1d-a723-e4452a80c902-kube-api-access-hwwqf") pod "storage-provisioner" (UID: "a453fdce-cdf1-4d1d-a723-e4452a80c902") : configmap "kube-root-ca.crt" not found
Oct 13 14:09:03 skaffold-600759 kubelet[2252]: I1013 14:09:03.479973 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b05dce05-44fa-4f30-99c0-2c28f61c280f-kube-proxy\") pod \"kube-proxy-g29j8\" (UID: \"b05dce05-44fa-4f30-99c0-2c28f61c280f\") " pod="kube-system/kube-proxy-g29j8"
Oct 13 14:09:03 skaffold-600759 kubelet[2252]: I1013 14:09:03.480017 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b05dce05-44fa-4f30-99c0-2c28f61c280f-xtables-lock\") pod \"kube-proxy-g29j8\" (UID: \"b05dce05-44fa-4f30-99c0-2c28f61c280f\") " pod="kube-system/kube-proxy-g29j8"
Oct 13 14:09:03 skaffold-600759 kubelet[2252]: I1013 14:09:03.480041 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvvt\" (UniqueName: \"kubernetes.io/projected/b05dce05-44fa-4f30-99c0-2c28f61c280f-kube-api-access-gbvvt\") pod \"kube-proxy-g29j8\" (UID: \"b05dce05-44fa-4f30-99c0-2c28f61c280f\") " pod="kube-system/kube-proxy-g29j8"
Oct 13 14:09:03 skaffold-600759 kubelet[2252]: I1013 14:09:03.480070 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b05dce05-44fa-4f30-99c0-2c28f61c280f-lib-modules\") pod \"kube-proxy-g29j8\" (UID: \"b05dce05-44fa-4f30-99c0-2c28f61c280f\") " pod="kube-system/kube-proxy-g29j8"
Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.083650 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmzpl\" (UniqueName: \"kubernetes.io/projected/aa5cf1e1-9802-417f-8d07-d304450b9e93-kube-api-access-lmzpl\") pod \"coredns-66bc5c9577-gzcdr\" (UID: \"aa5cf1e1-9802-417f-8d07-d304450b9e93\") " pod="kube-system/coredns-66bc5c9577-gzcdr"
Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.083745 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa5cf1e1-9802-417f-8d07-d304450b9e93-config-volume\") pod \"coredns-66bc5c9577-gzcdr\" (UID: \"aa5cf1e1-9802-417f-8d07-d304450b9e93\") " pod="kube-system/coredns-66bc5c9577-gzcdr"
Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.954198 2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.954175776 podStartE2EDuration="5.954175776s" podCreationTimestamp="2025-10-13 14:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:09:04.94304504 +0000 UTC m=+7.169202541" watchObservedRunningTime="2025-10-13 14:09:04.954175776 +0000 UTC m=+7.180333276"
Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.967364 2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g29j8" podStartSLOduration=1.967341644 podStartE2EDuration="1.967341644s" podCreationTimestamp="2025-10-13 14:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:09:04.954363616 +0000 UTC m=+7.180521111" watchObservedRunningTime="2025-10-13 14:09:04.967341644 +0000 UTC m=+7.193499145"
Oct 13 14:09:04 skaffold-600759 kubelet[2252]: I1013 14:09:04.976931 2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gzcdr" podStartSLOduration=1.9769098550000002 podStartE2EDuration="1.976909855s" podCreationTimestamp="2025-10-13 14:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 14:09:04.967692532 +0000 UTC m=+7.193850027" watchObservedRunningTime="2025-10-13 14:09:04.976909855 +0000 UTC m=+7.203067354"
==> storage-provisioner [68a7639d0dd4] <==
I1013 14:09:04.226534 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p skaffold-600759 -n skaffold-600759
helpers_test.go:269: (dbg) Run: kubectl --context skaffold-600759 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestSkaffold FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "skaffold-600759" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p skaffold-600759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-600759: (2.179644217s)
--- FAIL: TestSkaffold (37.44s)