=== RUN TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run: out/minikube-linux-arm64 start -p dockerenv-775346 --driver=docker --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-775346 --driver=docker --container-runtime=containerd: (31.079275264s)
docker_test.go:189: (dbg) Run: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-775346"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-775346": (1.095341208s)
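For reference: docker-env --ssh-host --ssh-add prints eval-able exports (SSH_AUTH_SOCK, SSH_AGENT_PID and an ssh:// DOCKER_HOST), which the test threads into every bash -c invocation below. A minimal manual sketch against the same profile from this run:

    # load the SSH-tunnelled docker environment for this profile, then talk to it
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-775346)"
    docker version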
docker_test.go:220: (dbg) Run: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" SSH_AGENT_PID="2805742" DOCKER_HOST=ssh://docker@127.0.0.1:36117 docker version"
docker_test.go:243: (dbg) Run: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" SSH_AGENT_PID="2805742" DOCKER_HOST=ssh://docker@127.0.0.1:36117 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" SSH_AGENT_PID="2805742" DOCKER_HOST=ssh://docker@127.0.0.1:36117 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (975.155297ms)
-- stdout --
Sending build context to Docker daemon 2.048kB
-- /stdout --
** stderr **
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
environment-variable.
Error response from daemon: exit status 1
** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
Sending build context to Docker daemon 2.048kB
-- /stdout --
** stderr **
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
environment-variable.
Error response from daemon: exit status 1
** /stderr **
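The stderr above names the knob involved: the build was forced onto the legacy builder via DOCKER_BUILDKIT=0, and the deprecation notice says to remove that variable to use BuildKit. A hedged manual retry against the same tunnelled daemon, reusing the exact values from this run (whether it succeeds depends on the daemon's BuildKit support):

    # retry the failing build with BuildKit enabled (DOCKER_BUILDKIT=0 dropped)
    SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" SSH_AGENT_PID="2805742" \
      DOCKER_HOST=ssh://docker@127.0.0.1:36117 \
      docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env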
docker_test.go:250: (dbg) Run: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" SSH_AGENT_PID="2805742" DOCKER_HOST=ssh://docker@127.0.0.1:36117 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
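docker_test.go:255 greps the image list for the tag it just built; an equivalent standalone check, sketched here assuming the same agent socket and DOCKER_HOST values as above:

    # list repository:tag pairs on the tunnelled daemon and look for the test image
    SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" \
      DOCKER_HOST=ssh://docker@127.0.0.1:36117 \
      docker image ls --format '{{.Repository}}:{{.Tag}}' | grep minikube-dockerenv-containerd-test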
panic.go:636: *** TestDockerEnvContainerd FAILED at 2025-10-02 20:59:46.216911048 +0000 UTC m=+450.981987465
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect dockerenv-775346
helpers_test.go:243: (dbg) docker inspect dockerenv-775346:
-- stdout --
[
{
"Id": "739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb",
"Created": "2025-10-02T20:59:07.011043266Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2803414,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-10-02T20:59:07.080612474Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
"ResolvConfPath": "/var/lib/docker/containers/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb/hostname",
"HostsPath": "/var/lib/docker/containers/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb/hosts",
"LogPath": "/var/lib/docker/containers/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb-json.log",
"Name": "/dockerenv-775346",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"dockerenv-775346:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "dockerenv-775346",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb",
"LowerDir": "/var/lib/docker/overlay2/d3a7e0770a4cc5a4a0aaeccc45ac25fd2bba799e559577eb0bd747692d1aae4f-init/diff:/var/lib/docker/overlay2/51331203fb22f22857c79ac4aca1f3d12d523fa3ef805f7f258c2d1849e728ca/diff",
"MergedDir": "/var/lib/docker/overlay2/d3a7e0770a4cc5a4a0aaeccc45ac25fd2bba799e559577eb0bd747692d1aae4f/merged",
"UpperDir": "/var/lib/docker/overlay2/d3a7e0770a4cc5a4a0aaeccc45ac25fd2bba799e559577eb0bd747692d1aae4f/diff",
"WorkDir": "/var/lib/docker/overlay2/d3a7e0770a4cc5a4a0aaeccc45ac25fd2bba799e559577eb0bd747692d1aae4f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "dockerenv-775346",
"Source": "/var/lib/docker/volumes/dockerenv-775346/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "dockerenv-775346",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "dockerenv-775346",
"name.minikube.sigs.k8s.io": "dockerenv-775346",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "67621488a9e1741cb12668c061c12ad684d0a96998fa69be5905d0dad8fdc318",
"SandboxKey": "/var/run/docker/netns/67621488a9e1",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36117"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36118"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36121"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36119"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36120"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"dockerenv-775346": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "7e:4d:99:40:6e:b4",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "131a0fc9f153f6aafeb418a164a9cc1920e87a5107e75c0b7f4840fad8ec7a6a",
"EndpointID": "19bdd29dd2aece5a3d7dfc6da4c22e2b04f333ac183bc5b1178ef75fdd75d46d",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"dockerenv-775346",
"739a435e5129"
]
}
}
}
}
]
-- /stdout --
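In the inspect output above, NetworkSettings.Ports maps the container's 22/tcp to 127.0.0.1:36117, which is exactly the ssh://docker@127.0.0.1:36117 endpoint docker-env handed out. minikube reads that port with the same Go template that appears later in this log:

    # extract the host port mapped to the node container's SSH port
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' dockerenv-775346
    # -> 36117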
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p dockerenv-775346 -n dockerenv-775346
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p dockerenv-775346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p dockerenv-775346 logs -n 25: (1.024564542s)
helpers_test.go:260: TestDockerEnvContainerd logs:
-- stdout --
==> Audit <==
┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ip │ addons-774992 ip │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
│ addons │ addons-774992 addons disable registry --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
│ addons │ addons-774992 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
│ ssh │ addons-774992 ssh cat /opt/local-path-provisioner/pvc-a72c6780-abc2-4dc3-9d6e-db75a010a533_default_test-pvc/file1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
│ addons │ addons-774992 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ addons-774992 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
│ addons │ addons-774992 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ addons-774992 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ enable headlamp -p addons-774992 --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ addons-774992 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ addons-774992 addons disable headlamp --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ addons-774992 addons disable metrics-server --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-774992 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ addons-774992 addons disable registry-creds --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ ssh │ addons-774992 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ ip │ addons-774992 ip │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ addons-774992 addons disable ingress-dns --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ addons-774992 addons disable ingress --alsologtostderr -v=1 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ stop │ -p addons-774992 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ enable dashboard -p addons-774992 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ disable dashboard -p addons-774992 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ addons │ disable gvisor -p addons-774992 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
│ delete │ -p addons-774992 │ addons-774992 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:59 UTC │
│ start │ -p dockerenv-775346 --driver=docker --container-runtime=containerd │ dockerenv-775346 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
│ docker-env │ --ssh-host --ssh-add -p dockerenv-775346 │ dockerenv-775346 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/02 20:59:01
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.24.6 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1002 20:59:01.671324 2803030 out.go:360] Setting OutFile to fd 1 ...
I1002 20:59:01.671424 2803030 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:01.671427 2803030 out.go:374] Setting ErrFile to fd 2...
I1002 20:59:01.671431 2803030 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:59:01.671785 2803030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
I1002 20:59:01.672246 2803030 out.go:368] Setting JSON to false
I1002 20:59:01.674095 2803030 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":60091,"bootTime":1759378651,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1002 20:59:01.674156 2803030 start.go:140] virtualization:
I1002 20:59:01.679060 2803030 out.go:179] * [dockerenv-775346] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1002 20:59:01.684310 2803030 out.go:179] - MINIKUBE_LOCATION=21682
I1002 20:59:01.684354 2803030 notify.go:220] Checking for updates...
I1002 20:59:01.688139 2803030 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1002 20:59:01.691689 2803030 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
I1002 20:59:01.695215 2803030 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
I1002 20:59:01.698553 2803030 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1002 20:59:01.701917 2803030 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1002 20:59:01.705425 2803030 driver.go:421] Setting default libvirt URI to qemu:///system
I1002 20:59:01.736855 2803030 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
I1002 20:59:01.736974 2803030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:59:01.797634 2803030 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 20:59:01.788187191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 20:59:01.797730 2803030 docker.go:318] overlay module found
I1002 20:59:01.801319 2803030 out.go:179] * Using the docker driver based on user configuration
I1002 20:59:01.804397 2803030 start.go:304] selected driver: docker
I1002 20:59:01.804404 2803030 start.go:924] validating driver "docker" against <nil>
I1002 20:59:01.804416 2803030 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1002 20:59:01.804534 2803030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:59:01.865425 2803030 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 20:59:01.856520687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 20:59:01.865560 2803030 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1002 20:59:01.865816 2803030 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
I1002 20:59:01.865973 2803030 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
I1002 20:59:01.869083 2803030 out.go:179] * Using Docker driver with root privileges
I1002 20:59:01.872260 2803030 cni.go:84] Creating CNI manager for ""
I1002 20:59:01.872322 2803030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1002 20:59:01.872329 2803030 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1002 20:59:01.872415 2803030 start.go:348] cluster config:
{Name:dockerenv-775346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 20:59:01.875711 2803030 out.go:179] * Starting "dockerenv-775346" primary control-plane node in "dockerenv-775346" cluster
I1002 20:59:01.878739 2803030 cache.go:123] Beginning downloading kic base image for docker with containerd
I1002 20:59:01.881864 2803030 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
I1002 20:59:01.884810 2803030 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1002 20:59:01.884876 2803030 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
I1002 20:59:01.884884 2803030 cache.go:58] Caching tarball of preloaded images
I1002 20:59:01.884902 2803030 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
I1002 20:59:01.884989 2803030 preload.go:233] Found /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1002 20:59:01.884998 2803030 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
I1002 20:59:01.885346 2803030 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/config.json ...
I1002 20:59:01.885366 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/config.json: {Name:mk067fa1d4bccb53f2d40a39c10ea94b3afa03dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:01.908922 2803030 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
I1002 20:59:01.908934 2803030 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
I1002 20:59:01.908947 2803030 cache.go:232] Successfully downloaded all kic artifacts
I1002 20:59:01.908967 2803030 start.go:360] acquireMachinesLock for dockerenv-775346: {Name:mkc978960753899d4d97eb2f18d1d9c1e4a59ed3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 20:59:01.909771 2803030 start.go:364] duration metric: took 783.612µs to acquireMachinesLock for "dockerenv-775346"
I1002 20:59:01.909805 2803030 start.go:93] Provisioning new machine with config: &{Name:dockerenv-775346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1002 20:59:01.909873 2803030 start.go:125] createHost starting for "" (driver="docker")
I1002 20:59:01.913336 2803030 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1002 20:59:01.913592 2803030 start.go:159] libmachine.API.Create for "dockerenv-775346" (driver="docker")
I1002 20:59:01.913633 2803030 client.go:168] LocalClient.Create starting
I1002 20:59:01.913706 2803030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem
I1002 20:59:01.913741 2803030 main.go:141] libmachine: Decoding PEM data...
I1002 20:59:01.913753 2803030 main.go:141] libmachine: Parsing certificate...
I1002 20:59:01.913808 2803030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem
I1002 20:59:01.913839 2803030 main.go:141] libmachine: Decoding PEM data...
I1002 20:59:01.913852 2803030 main.go:141] libmachine: Parsing certificate...
I1002 20:59:01.914242 2803030 cli_runner.go:164] Run: docker network inspect dockerenv-775346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 20:59:01.930731 2803030 cli_runner.go:211] docker network inspect dockerenv-775346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 20:59:01.930797 2803030 network_create.go:284] running [docker network inspect dockerenv-775346] to gather additional debugging logs...
I1002 20:59:01.930811 2803030 cli_runner.go:164] Run: docker network inspect dockerenv-775346
W1002 20:59:01.947446 2803030 cli_runner.go:211] docker network inspect dockerenv-775346 returned with exit code 1
I1002 20:59:01.947466 2803030 network_create.go:287] error running [docker network inspect dockerenv-775346]: docker network inspect dockerenv-775346: exit status 1
stdout:
[]
stderr:
Error response from daemon: network dockerenv-775346 not found
I1002 20:59:01.947478 2803030 network_create.go:289] output of [docker network inspect dockerenv-775346]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network dockerenv-775346 not found
** /stderr **
I1002 20:59:01.947572 2803030 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 20:59:01.964004 2803030 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001867db0}
I1002 20:59:01.964032 2803030 network_create.go:124] attempt to create docker network dockerenv-775346 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1002 20:59:01.964094 2803030 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-775346 dockerenv-775346
I1002 20:59:02.030036 2803030 network_create.go:108] docker network dockerenv-775346 192.168.49.0/24 created
I1002 20:59:02.030059 2803030 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-775346" container
I1002 20:59:02.030134 2803030 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1002 20:59:02.045096 2803030 cli_runner.go:164] Run: docker volume create dockerenv-775346 --label name.minikube.sigs.k8s.io=dockerenv-775346 --label created_by.minikube.sigs.k8s.io=true
I1002 20:59:02.063083 2803030 oci.go:103] Successfully created a docker volume dockerenv-775346
I1002 20:59:02.063161 2803030 cli_runner.go:164] Run: docker run --rm --name dockerenv-775346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-775346 --entrypoint /usr/bin/test -v dockerenv-775346:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
I1002 20:59:02.600678 2803030 oci.go:107] Successfully prepared a docker volume dockerenv-775346
I1002 20:59:02.600714 2803030 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1002 20:59:02.600731 2803030 kic.go:194] Starting extracting preloaded images to volume ...
I1002 20:59:02.600803 2803030 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-775346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
I1002 20:59:06.942159 2803030 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-775346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.341302563s)
I1002 20:59:06.942181 2803030 kic.go:203] duration metric: took 4.341445351s to extract preloaded images to volume ...
W1002 20:59:06.942611 2803030 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1002 20:59:06.942725 2803030 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1002 20:59:06.995408 2803030 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-775346 --name dockerenv-775346 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-775346 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-775346 --network dockerenv-775346 --ip 192.168.49.2 --volume dockerenv-775346:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
I1002 20:59:07.306844 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Running}}
I1002 20:59:07.326145 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
I1002 20:59:07.352401 2803030 cli_runner.go:164] Run: docker exec dockerenv-775346 stat /var/lib/dpkg/alternatives/iptables
I1002 20:59:07.399073 2803030 oci.go:144] the created container "dockerenv-775346" has a running status.
I1002 20:59:07.399102 2803030 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa...
I1002 20:59:07.721363 2803030 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1002 20:59:07.745626 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
I1002 20:59:07.775587 2803030 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1002 20:59:07.775599 2803030 kic_runner.go:114] Args: [docker exec --privileged dockerenv-775346 chown docker:docker /home/docker/.ssh/authorized_keys]
I1002 20:59:07.855965 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
I1002 20:59:07.878672 2803030 machine.go:93] provisionDockerMachine start ...
I1002 20:59:07.878817 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:07.903567 2803030 main.go:141] libmachine: Using SSH client type: native
I1002 20:59:07.903902 2803030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil> [] 0s} 127.0.0.1 36117 <nil> <nil>}
I1002 20:59:07.903910 2803030 main.go:141] libmachine: About to run SSH command:
hostname
I1002 20:59:07.904439 2803030 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40502->127.0.0.1:36117: read: connection reset by peer
I1002 20:59:11.035010 2803030 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-775346
I1002 20:59:11.035025 2803030 ubuntu.go:182] provisioning hostname "dockerenv-775346"
I1002 20:59:11.035085 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:11.058812 2803030 main.go:141] libmachine: Using SSH client type: native
I1002 20:59:11.059149 2803030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil> [] 0s} 127.0.0.1 36117 <nil> <nil>}
I1002 20:59:11.059158 2803030 main.go:141] libmachine: About to run SSH command:
sudo hostname dockerenv-775346 && echo "dockerenv-775346" | sudo tee /etc/hostname
I1002 20:59:11.201403 2803030 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-775346
I1002 20:59:11.201472 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:11.219623 2803030 main.go:141] libmachine: Using SSH client type: native
I1002 20:59:11.219947 2803030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil> [] 0s} 127.0.0.1 36117 <nil> <nil>}
I1002 20:59:11.219962 2803030 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sdockerenv-775346' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-775346/g' /etc/hosts;
else
echo '127.0.1.1 dockerenv-775346' | sudo tee -a /etc/hosts;
fi
fi
I1002 20:59:11.351549 2803030 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1002 20:59:11.351568 2803030 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-2783765/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-2783765/.minikube}
I1002 20:59:11.351591 2803030 ubuntu.go:190] setting up certificates
I1002 20:59:11.351600 2803030 provision.go:84] configureAuth start
I1002 20:59:11.351657 2803030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-775346
I1002 20:59:11.367935 2803030 provision.go:143] copyHostCerts
I1002 20:59:11.367994 2803030 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem, removing ...
I1002 20:59:11.368002 2803030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem
I1002 20:59:11.368082 2803030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem (1078 bytes)
I1002 20:59:11.368171 2803030 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem, removing ...
I1002 20:59:11.368175 2803030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem
I1002 20:59:11.368198 2803030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem (1123 bytes)
I1002 20:59:11.368245 2803030 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem, removing ...
I1002 20:59:11.368248 2803030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem
I1002 20:59:11.368279 2803030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem (1675 bytes)
I1002 20:59:11.368326 2803030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem org=jenkins.dockerenv-775346 san=[127.0.0.1 192.168.49.2 dockerenv-775346 localhost minikube]
I1002 20:59:11.541388 2803030 provision.go:177] copyRemoteCerts
I1002 20:59:11.541447 2803030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1002 20:59:11.541484 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:11.558660 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
I1002 20:59:11.655096 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1002 20:59:11.673547 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I1002 20:59:11.691398 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1002 20:59:11.708786 2803030 provision.go:87] duration metric: took 357.161611ms to configureAuth
I1002 20:59:11.708802 2803030 ubuntu.go:206] setting minikube options for container-runtime
I1002 20:59:11.708986 2803030 config.go:182] Loaded profile config "dockerenv-775346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 20:59:11.708991 2803030 machine.go:96] duration metric: took 3.830309767s to provisionDockerMachine
I1002 20:59:11.708996 2803030 client.go:171] duration metric: took 9.795359097s to LocalClient.Create
I1002 20:59:11.709034 2803030 start.go:167] duration metric: took 9.795431393s to libmachine.API.Create "dockerenv-775346"
I1002 20:59:11.709041 2803030 start.go:293] postStartSetup for "dockerenv-775346" (driver="docker")
I1002 20:59:11.709049 2803030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1002 20:59:11.709106 2803030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1002 20:59:11.709146 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:11.725982 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
I1002 20:59:11.823260 2803030 ssh_runner.go:195] Run: cat /etc/os-release
I1002 20:59:11.826367 2803030 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1002 20:59:11.826384 2803030 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1002 20:59:11.826394 2803030 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-2783765/.minikube/addons for local assets ...
I1002 20:59:11.826449 2803030 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-2783765/.minikube/files for local assets ...
I1002 20:59:11.826467 2803030 start.go:296] duration metric: took 117.421232ms for postStartSetup
I1002 20:59:11.826773 2803030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-775346
I1002 20:59:11.842967 2803030 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/config.json ...
I1002 20:59:11.843245 2803030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1002 20:59:11.843308 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:11.859881 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
I1002 20:59:11.952845 2803030 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1002 20:59:11.957530 2803030 start.go:128] duration metric: took 10.047643757s to createHost
I1002 20:59:11.957543 2803030 start.go:83] releasing machines lock for "dockerenv-775346", held for 10.047760952s
I1002 20:59:11.957621 2803030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-775346
I1002 20:59:11.973891 2803030 ssh_runner.go:195] Run: cat /version.json
I1002 20:59:11.973936 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:11.974185 2803030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1002 20:59:11.974238 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:11.993173 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
I1002 20:59:11.993625 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
I1002 20:59:12.175449 2803030 ssh_runner.go:195] Run: systemctl --version
I1002 20:59:12.182024 2803030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1002 20:59:12.186532 2803030 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1002 20:59:12.186592 2803030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1002 20:59:12.217077 2803030 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1002 20:59:12.217090 2803030 start.go:495] detecting cgroup driver to use...
I1002 20:59:12.217132 2803030 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1002 20:59:12.217184 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1002 20:59:12.232020 2803030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1002 20:59:12.245277 2803030 docker.go:218] disabling cri-docker service (if available) ...
I1002 20:59:12.245336 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1002 20:59:12.262770 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1002 20:59:12.281237 2803030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1002 20:59:12.404819 2803030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1002 20:59:12.529325 2803030 docker.go:234] disabling docker service ...
I1002 20:59:12.529380 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1002 20:59:12.551153 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1002 20:59:12.564545 2803030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1002 20:59:12.681253 2803030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1002 20:59:12.807131 2803030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1002 20:59:12.819761 2803030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1002 20:59:12.834998 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1002 20:59:12.844551 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1002 20:59:12.853096 2803030 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1002 20:59:12.853153 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1002 20:59:12.861856 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1002 20:59:12.871160 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1002 20:59:12.879672 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1002 20:59:12.888240 2803030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1002 20:59:12.896481 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1002 20:59:12.904979 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1002 20:59:12.913668 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1002 20:59:12.922315 2803030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1002 20:59:12.929751 2803030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1002 20:59:12.937259 2803030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 20:59:13.056876 2803030 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1002 20:59:13.201401 2803030 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1002 20:59:13.201480 2803030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1002 20:59:13.205718 2803030 start.go:563] Will wait 60s for crictl version
I1002 20:59:13.205804 2803030 ssh_runner.go:195] Run: which crictl
I1002 20:59:13.209445 2803030 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1002 20:59:13.242985 2803030 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.28
RuntimeApiVersion: v1
I1002 20:59:13.243051 2803030 ssh_runner.go:195] Run: containerd --version
I1002 20:59:13.266250 2803030 ssh_runner.go:195] Run: containerd --version
I1002 20:59:13.292452 2803030 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
I1002 20:59:13.295463 2803030 cli_runner.go:164] Run: docker network inspect dockerenv-775346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 20:59:13.309817 2803030 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1002 20:59:13.313323 2803030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1002 20:59:13.322582 2803030 kubeadm.go:883] updating cluster {Name:dockerenv-775346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1002 20:59:13.322683 2803030 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1002 20:59:13.322744 2803030 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 20:59:13.348718 2803030 containerd.go:627] all images are preloaded for containerd runtime.
I1002 20:59:13.348730 2803030 containerd.go:534] Images already preloaded, skipping extraction
I1002 20:59:13.348789 2803030 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 20:59:13.373373 2803030 containerd.go:627] all images are preloaded for containerd runtime.
I1002 20:59:13.373385 2803030 cache_images.go:85] Images are preloaded, skipping loading
I1002 20:59:13.373391 2803030 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
I1002 20:59:13.373494 2803030 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-775346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
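In the drop-in above, the bare ExecStart= line uses the standard systemd idiom: an empty assignment clears any ExecStart inherited from kubelet.service before the minikube-specific command line is substituted. A hedged way to confirm what the merged unit actually resolves to:
$ # show kubelet.service plus all drop-ins, in merge order
$ systemctl cat kubelet
$ # print only the effective ExecStart after the reset-and-replace
$ systemctl show kubelet -p ExecStart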
I1002 20:59:13.373554 2803030 ssh_runner.go:195] Run: sudo crictl info
I1002 20:59:13.398846 2803030 cni.go:84] Creating CNI manager for ""
I1002 20:59:13.398856 2803030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1002 20:59:13.398870 2803030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1002 20:59:13.398890 2803030 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-775346 NodeName:dockerenv-775346 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1002 20:59:13.399007 2803030 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "dockerenv-775346"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
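The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new and only promoted to kubeadm.yaml just before init runs. If you need to lint such a file by hand, recent kubeadm releases ship a validate subcommand; a sketch, assuming the minikube binary layout shown in this log:
$ # check the generated manifest against the v1beta4 schema before running init
$ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new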
I1002 20:59:13.399072 2803030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1002 20:59:13.407630 2803030 binaries.go:44] Found k8s binaries, skipping transfer
I1002 20:59:13.407691 2803030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1002 20:59:13.415357 2803030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
I1002 20:59:13.428313 2803030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1002 20:59:13.441479 2803030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
I1002 20:59:13.454287 2803030 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1002 20:59:13.457843 2803030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1002 20:59:13.467682 2803030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 20:59:13.583616 2803030 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1002 20:59:13.600252 2803030 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346 for IP: 192.168.49.2
I1002 20:59:13.600263 2803030 certs.go:195] generating shared ca certs ...
I1002 20:59:13.600287 2803030 certs.go:227] acquiring lock for ca certs: {Name:mk9dd0ab4a99d312fca91f03b1dec8574d28a55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:13.600459 2803030 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.key
I1002 20:59:13.600511 2803030 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.key
I1002 20:59:13.600517 2803030 certs.go:257] generating profile certs ...
I1002 20:59:13.600582 2803030 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.key
I1002 20:59:13.600598 2803030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.crt with IP's: []
I1002 20:59:14.302356 2803030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.crt ...
I1002 20:59:14.302372 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.crt: {Name:mkf4058f63a7f563447b3efb417d68ab79bee39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:14.302572 2803030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.key ...
I1002 20:59:14.302578 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.key: {Name:mke1efa342f8c4475fc37fae4c481852494f8fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:14.302668 2803030 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key.ac0d1c47
I1002 20:59:14.302679 2803030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt.ac0d1c47 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1002 20:59:15.255311 2803030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt.ac0d1c47 ...
I1002 20:59:15.255333 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt.ac0d1c47: {Name:mkd6ff2ce78e54296bdd851b535714f0b3de5bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:15.255531 2803030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key.ac0d1c47 ...
I1002 20:59:15.255541 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key.ac0d1c47: {Name:mkb3e9b9904eea83f9d6e1e29864a6615a2b4440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:15.256294 2803030 certs.go:382] copying /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt.ac0d1c47 -> /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt
I1002 20:59:15.256374 2803030 certs.go:386] copying /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key.ac0d1c47 -> /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key
I1002 20:59:15.256427 2803030 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.key
I1002 20:59:15.256440 2803030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.crt with IP's: []
I1002 20:59:15.335793 2803030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.crt ...
I1002 20:59:15.335812 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.crt: {Name:mkf4be235f119f719985f10d0eb72a856088bea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:15.336024 2803030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.key ...
I1002 20:59:15.336031 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.key: {Name:mkd6f13fd7656022f08234460605016139c2114e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:15.336220 2803030 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem (1679 bytes)
I1002 20:59:15.336269 2803030 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem (1078 bytes)
I1002 20:59:15.336294 2803030 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem (1123 bytes)
I1002 20:59:15.336315 2803030 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem (1675 bytes)
I1002 20:59:15.336894 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1002 20:59:15.355548 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1002 20:59:15.372993 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1002 20:59:15.390743 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1002 20:59:15.408089 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1002 20:59:15.424870 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1002 20:59:15.441897 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1002 20:59:15.459561 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1002 20:59:15.476637 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1002 20:59:15.494187 2803030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1002 20:59:15.506465 2803030 ssh_runner.go:195] Run: openssl version
I1002 20:59:15.515752 2803030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1002 20:59:15.524954 2803030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1002 20:59:15.528544 2803030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 2 20:53 /usr/share/ca-certificates/minikubeCA.pem
I1002 20:59:15.528597 2803030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1002 20:59:15.570682 2803030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
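The b5213941.0 symlink created above is named after the CA's openssl subject hash, which is how -CApath style lookups locate it. A minimal cross-check, assuming the paths from this run:
$ # the symlink name must match the CA's subject hash
$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
b5213941
$ # with the hash symlink in place, path-based verification succeeds
$ openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem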
I1002 20:59:15.579387 2803030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1002 20:59:15.583698 2803030 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1002 20:59:15.583750 2803030 kubeadm.go:400] StartCluster: {Name:dockerenv-775346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 20:59:15.583821 2803030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1002 20:59:15.583890 2803030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1002 20:59:15.611249 2803030 cri.go:89] found id: ""
I1002 20:59:15.611340 2803030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1002 20:59:15.619060 2803030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1002 20:59:15.626716 2803030 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1002 20:59:15.626768 2803030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1002 20:59:15.634435 2803030 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1002 20:59:15.634452 2803030 kubeadm.go:157] found existing configuration files:
I1002 20:59:15.634503 2803030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1002 20:59:15.642055 2803030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1002 20:59:15.642115 2803030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1002 20:59:15.649437 2803030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1002 20:59:15.656723 2803030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1002 20:59:15.656794 2803030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1002 20:59:15.664171 2803030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1002 20:59:15.671623 2803030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1002 20:59:15.671677 2803030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1002 20:59:15.678703 2803030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1002 20:59:15.686065 2803030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1002 20:59:15.686135 2803030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1002 20:59:15.693340 2803030 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1002 20:59:15.755497 2803030 kubeadm.go:318] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I1002 20:59:15.755763 2803030 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1002 20:59:15.822683 2803030 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1002 20:59:30.890674 2803030 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1002 20:59:30.890724 2803030 kubeadm.go:318] [preflight] Running pre-flight checks
I1002 20:59:30.890823 2803030 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1002 20:59:30.890880 2803030 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
I1002 20:59:30.890915 2803030 kubeadm.go:318] OS: Linux
I1002 20:59:30.890961 2803030 kubeadm.go:318] CGROUPS_CPU: enabled
I1002 20:59:30.891010 2803030 kubeadm.go:318] CGROUPS_CPUACCT: enabled
I1002 20:59:30.891058 2803030 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1002 20:59:30.891109 2803030 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1002 20:59:30.891158 2803030 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1002 20:59:30.891208 2803030 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1002 20:59:30.891255 2803030 kubeadm.go:318] CGROUPS_PIDS: enabled
I1002 20:59:30.891313 2803030 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1002 20:59:30.891371 2803030 kubeadm.go:318] CGROUPS_BLKIO: enabled
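The SystemVerification warnings earlier stem from this host still running cgroup v1; the controller list kubeadm prints can be read directly off the node. A hedged equivalent of that probe:
$ # cgroup v1: each enabled controller is a directory
$ ls /sys/fs/cgroup
$ # cgroup v2: the enabled controllers are listed in one file
$ cat /sys/fs/cgroup/cgroup.controllers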
I1002 20:59:30.891445 2803030 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1002 20:59:30.891542 2803030 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1002 20:59:30.891634 2803030 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1002 20:59:30.891697 2803030 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1002 20:59:30.894497 2803030 out.go:252] - Generating certificates and keys ...
I1002 20:59:30.894596 2803030 kubeadm.go:318] [certs] Using existing ca certificate authority
I1002 20:59:30.894666 2803030 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1002 20:59:30.894740 2803030 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1002 20:59:30.894798 2803030 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1002 20:59:30.894860 2803030 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1002 20:59:30.894913 2803030 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1002 20:59:30.894969 2803030 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1002 20:59:30.895093 2803030 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [dockerenv-775346 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1002 20:59:30.895146 2803030 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1002 20:59:30.895268 2803030 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-775346 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1002 20:59:30.895356 2803030 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1002 20:59:30.895422 2803030 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1002 20:59:30.895467 2803030 kubeadm.go:318] [certs] Generating "sa" key and public key
I1002 20:59:30.895527 2803030 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1002 20:59:30.895579 2803030 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1002 20:59:30.895637 2803030 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1002 20:59:30.895692 2803030 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1002 20:59:30.895757 2803030 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1002 20:59:30.895813 2803030 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1002 20:59:30.895897 2803030 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1002 20:59:30.895965 2803030 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1002 20:59:30.898965 2803030 out.go:252] - Booting up control plane ...
I1002 20:59:30.899062 2803030 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1002 20:59:30.899165 2803030 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1002 20:59:30.899244 2803030 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1002 20:59:30.899412 2803030 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1002 20:59:30.899510 2803030 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1002 20:59:30.899618 2803030 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1002 20:59:30.899713 2803030 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1002 20:59:30.899753 2803030 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1002 20:59:30.899913 2803030 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1002 20:59:30.900033 2803030 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1002 20:59:30.900096 2803030 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.647584ms
I1002 20:59:30.900194 2803030 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1002 20:59:30.900277 2803030 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1002 20:59:30.900369 2803030 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1002 20:59:30.900452 2803030 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1002 20:59:30.900531 2803030 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.832695021s
I1002 20:59:30.900600 2803030 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.216675389s
I1002 20:59:30.900670 2803030 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501951443s
I1002 20:59:30.900782 2803030 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1002 20:59:30.900911 2803030 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1002 20:59:30.900979 2803030 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
I1002 20:59:30.901178 2803030 kubeadm.go:318] [mark-control-plane] Marking the node dockerenv-775346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1002 20:59:30.901235 2803030 kubeadm.go:318] [bootstrap-token] Using token: cu7jdy.ihu3q1moz9w9prz3
I1002 20:59:30.904064 2803030 out.go:252] - Configuring RBAC rules ...
I1002 20:59:30.904182 2803030 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1002 20:59:30.904269 2803030 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1002 20:59:30.904443 2803030 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1002 20:59:30.904593 2803030 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1002 20:59:30.904715 2803030 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1002 20:59:30.904804 2803030 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1002 20:59:30.904923 2803030 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1002 20:59:30.904968 2803030 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
I1002 20:59:30.905015 2803030 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
I1002 20:59:30.905019 2803030 kubeadm.go:318]
I1002 20:59:30.905085 2803030 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
I1002 20:59:30.905088 2803030 kubeadm.go:318]
I1002 20:59:30.905168 2803030 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
I1002 20:59:30.905171 2803030 kubeadm.go:318]
I1002 20:59:30.905196 2803030 kubeadm.go:318] mkdir -p $HOME/.kube
I1002 20:59:30.905257 2803030 kubeadm.go:318] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1002 20:59:30.905309 2803030 kubeadm.go:318] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1002 20:59:30.905312 2803030 kubeadm.go:318]
I1002 20:59:30.905368 2803030 kubeadm.go:318] Alternatively, if you are the root user, you can run:
I1002 20:59:30.905371 2803030 kubeadm.go:318]
I1002 20:59:30.905420 2803030 kubeadm.go:318] export KUBECONFIG=/etc/kubernetes/admin.conf
I1002 20:59:30.905423 2803030 kubeadm.go:318]
I1002 20:59:30.905476 2803030 kubeadm.go:318] You should now deploy a pod network to the cluster.
I1002 20:59:30.905554 2803030 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1002 20:59:30.905624 2803030 kubeadm.go:318] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1002 20:59:30.905627 2803030 kubeadm.go:318]
I1002 20:59:30.905727 2803030 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
I1002 20:59:30.905807 2803030 kubeadm.go:318] and service account keys on each node and then running the following as root:
I1002 20:59:30.905810 2803030 kubeadm.go:318]
I1002 20:59:30.905898 2803030 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token cu7jdy.ihu3q1moz9w9prz3 \
I1002 20:59:30.906004 2803030 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:1398f01722b622f845548c7ec65fd7116bf0d2b59eb2ba444bbb109867d41495 \
I1002 20:59:30.906024 2803030 kubeadm.go:318] --control-plane
I1002 20:59:30.906028 2803030 kubeadm.go:318]
I1002 20:59:30.906116 2803030 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
I1002 20:59:30.906119 2803030 kubeadm.go:318]
I1002 20:59:30.906204 2803030 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token cu7jdy.ihu3q1moz9w9prz3 \
I1002 20:59:30.906324 2803030 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:1398f01722b622f845548c7ec65fd7116bf0d2b59eb2ba444bbb109867d41495
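The --discovery-token-ca-cert-hash printed above pins the cluster CA for joining nodes, and it can be recomputed from the CA certificate itself. The standard kubeadm recipe, using the certificatesDir from this cluster's config:
$ # sha256 over the DER-encoded CA public key, matching kubeadm's hash format
$ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'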
I1002 20:59:30.906331 2803030 cni.go:84] Creating CNI manager for ""
I1002 20:59:30.906336 2803030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1002 20:59:30.911118 2803030 out.go:179] * Configuring CNI (Container Networking Interface) ...
I1002 20:59:30.913956 2803030 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1002 20:59:30.918708 2803030 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
I1002 20:59:30.918718 2803030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I1002 20:59:30.933013 2803030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1002 20:59:31.244338 2803030 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1002 20:59:31.244501 2803030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1002 20:59:31.244580 2803030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-775346 minikube.k8s.io/updated_at=2025_10_02T20_59_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=dockerenv-775346 minikube.k8s.io/primary=true
I1002 20:59:31.355341 2803030 kubeadm.go:1113] duration metric: took 110.896881ms to wait for elevateKubeSystemPrivileges
I1002 20:59:31.355361 2803030 ops.go:34] apiserver oom_adj: -16
I1002 20:59:31.427980 2803030 kubeadm.go:402] duration metric: took 15.844227533s to StartCluster
I1002 20:59:31.428004 2803030 settings.go:142] acquiring lock: {Name:mke92114e22bdbcff74119665eced9d6b9ac1b1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:31.428077 2803030 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21682-2783765/kubeconfig
I1002 20:59:31.428722 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/kubeconfig: {Name:mkcf76851e68b723b0046b589af4cfa7ca9a3bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:59:31.428964 2803030 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1002 20:59:31.429084 2803030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1002 20:59:31.429314 2803030 config.go:182] Loaded profile config "dockerenv-775346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 20:59:31.429343 2803030 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1002 20:59:31.429401 2803030 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-775346"
I1002 20:59:31.429413 2803030 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-775346"
I1002 20:59:31.429433 2803030 host.go:66] Checking if "dockerenv-775346" exists ...
I1002 20:59:31.429947 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
I1002 20:59:31.430220 2803030 addons.go:69] Setting default-storageclass=true in profile "dockerenv-775346"
I1002 20:59:31.430243 2803030 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-775346"
I1002 20:59:31.430526 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
I1002 20:59:31.434700 2803030 out.go:179] * Verifying Kubernetes components...
I1002 20:59:31.437985 2803030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 20:59:31.477569 2803030 addons.go:238] Setting addon default-storageclass=true in "dockerenv-775346"
I1002 20:59:31.477600 2803030 host.go:66] Checking if "dockerenv-775346" exists ...
I1002 20:59:31.478053 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
I1002 20:59:31.479510 2803030 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1002 20:59:31.482387 2803030 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1002 20:59:31.482397 2803030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1002 20:59:31.482466 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:31.502300 2803030 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I1002 20:59:31.502313 2803030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1002 20:59:31.502379 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
I1002 20:59:31.536824 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
I1002 20:59:31.549570 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
I1002 20:59:31.742953 2803030 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1002 20:59:31.742981 2803030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
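The sed pipeline above splices a hosts block in front of CoreDNS's forward plugin so that host.minikube.internal resolves to 192.168.49.1 from inside pods, then replaces the live ConfigMap. A hypothetical follow-up to confirm the injection landed:
$ # dump the patched Corefile and show the injected hosts stanza
$ sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'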
I1002 20:59:31.809942 2803030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1002 20:59:31.846392 2803030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1002 20:59:32.125089 2803030 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I1002 20:59:32.126803 2803030 api_server.go:52] waiting for apiserver process to appear ...
I1002 20:59:32.126849 2803030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 20:59:32.347341 2803030 api_server.go:72] duration metric: took 918.35105ms to wait for apiserver process to appear ...
I1002 20:59:32.347356 2803030 api_server.go:88] waiting for apiserver healthz status ...
I1002 20:59:32.347373 2803030 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I1002 20:59:32.350371 2803030 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
I1002 20:59:32.354033 2803030 addons.go:514] duration metric: took 924.666456ms for enable addons: enabled=[default-storageclass storage-provisioner]
I1002 20:59:32.359337 2803030 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I1002 20:59:32.361453 2803030 api_server.go:141] control plane version: v1.34.1
I1002 20:59:32.361475 2803030 api_server.go:131] duration metric: took 14.108599ms to wait for apiserver health ...
I1002 20:59:32.361482 2803030 system_pods.go:43] waiting for kube-system pods to appear ...
I1002 20:59:32.364202 2803030 system_pods.go:59] 5 kube-system pods found
I1002 20:59:32.364223 2803030 system_pods.go:61] "etcd-dockerenv-775346" [07ec3103-288b-4541-9903-1b9dd312f03c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1002 20:59:32.364232 2803030 system_pods.go:61] "kube-apiserver-dockerenv-775346" [13a48838-39ec-48aa-a374-0fc832283591] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1002 20:59:32.364241 2803030 system_pods.go:61] "kube-controller-manager-dockerenv-775346" [e8ac73ad-99fb-4e07-a59a-bfec06860633] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1002 20:59:32.364248 2803030 system_pods.go:61] "kube-scheduler-dockerenv-775346" [db2548ab-6e65-4cb0-9a3a-19f7394ad0dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1002 20:59:32.364253 2803030 system_pods.go:61] "storage-provisioner" [e4443f71-fb3d-4898-964a-d5fc6ec97c63] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
I1002 20:59:32.364257 2803030 system_pods.go:74] duration metric: took 2.771403ms to wait for pod list to return data ...
I1002 20:59:32.364267 2803030 kubeadm.go:586] duration metric: took 935.28309ms to wait for: map[apiserver:true system_pods:true]
I1002 20:59:32.364278 2803030 node_conditions.go:102] verifying NodePressure condition ...
I1002 20:59:32.366930 2803030 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I1002 20:59:32.366949 2803030 node_conditions.go:123] node cpu capacity is 2
I1002 20:59:32.366960 2803030 node_conditions.go:105] duration metric: took 2.678782ms to run NodePressure ...
I1002 20:59:32.366984 2803030 start.go:241] waiting for startup goroutines ...
I1002 20:59:32.629510 2803030 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-775346" context rescaled to 1 replicas
I1002 20:59:32.629538 2803030 start.go:246] waiting for cluster config update ...
I1002 20:59:32.629549 2803030 start.go:255] writing updated cluster config ...
I1002 20:59:32.629843 2803030 ssh_runner.go:195] Run: rm -f paused
I1002 20:59:32.686217 2803030 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
I1002 20:59:32.689614 2803030 out.go:179] * Done! kubectl is now configured to use "dockerenv-775346" cluster and "default" namespace by default
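With the profile started, the host kubeconfig context now points at the new cluster, so ordinary tooling works immediately (an assumed post-start check, not part of the test):
$ kubectl config current-context
dockerenv-775346
$ kubectl get nodes -o wide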
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
e9a32d4528330 b1a8c6f707935 10 seconds ago Running kindnet-cni 0 db9341869a0c2 kindnet-th5cx kube-system
87c5429d2fc8c 05baa95f5142d 11 seconds ago Running kube-proxy 0 f407d899871bb kube-proxy-x2btr kube-system
48296eb5b38fb b5f57ec6b9867 23 seconds ago Running kube-scheduler 0 1c10caff54b0f kube-scheduler-dockerenv-775346 kube-system
c87a17493be59 7eb2c6ff0c5a7 23 seconds ago Running kube-controller-manager 0 d0bfbff314a60 kube-controller-manager-dockerenv-775346 kube-system
5fb27d2868430 43911e833d64d 23 seconds ago Running kube-apiserver 0 3c6c117d46a86 kube-apiserver-dockerenv-775346 kube-system
8f0f173c3c0b1 a1894772a478e 23 seconds ago Running etcd 0 81dd7a872e437 etcd-dockerenv-775346 kube-system
==> containerd <==
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.566281821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-dockerenv-775346,Uid:1d05ec45a3a2d828892c1421eb6b78da,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0bfbff314a601df9a07287dcee6c82ecc775f43251b07ee9703e489e75348fa\""
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.571995593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-dockerenv-775346,Uid:6496bc09232df9221ccdec1baf7dafb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c10caff54b0f1e858f61e7c15a2a5ffe9a2fef8f07b1d4c036c2fd3fed065fb\""
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.574030762Z" level=info msg="CreateContainer within sandbox \"d0bfbff314a601df9a07287dcee6c82ecc775f43251b07ee9703e489e75348fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.578761035Z" level=info msg="CreateContainer within sandbox \"1c10caff54b0f1e858f61e7c15a2a5ffe9a2fef8f07b1d4c036c2fd3fed065fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.607946702Z" level=info msg="CreateContainer within sandbox \"d0bfbff314a601df9a07287dcee6c82ecc775f43251b07ee9703e489e75348fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c87a17493be5940997552f9e598dd3a1a99851d77385206825bccfc423a4e97e\""
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.608137204Z" level=info msg="StartContainer for \"8f0f173c3c0b15977361c646e8f1ec54ebbfe51e58eaa06b948b5613c3ef1870\" returns successfully"
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.609094406Z" level=info msg="StartContainer for \"c87a17493be5940997552f9e598dd3a1a99851d77385206825bccfc423a4e97e\""
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.632745381Z" level=info msg="CreateContainer within sandbox \"1c10caff54b0f1e858f61e7c15a2a5ffe9a2fef8f07b1d4c036c2fd3fed065fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"48296eb5b38fbfa582596583d745214e9734a1b113e60bde5b5377e4418aaafe\""
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.633444560Z" level=info msg="StartContainer for \"48296eb5b38fbfa582596583d745214e9734a1b113e60bde5b5377e4418aaafe\""
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.667572114Z" level=info msg="StartContainer for \"5fb27d28684302e9bdd3e507c5571ab54f0ca6d2eafc095348f4186270ef6dd0\" returns successfully"
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.741254801Z" level=info msg="StartContainer for \"48296eb5b38fbfa582596583d745214e9734a1b113e60bde5b5377e4418aaafe\" returns successfully"
Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.791484589Z" level=info msg="StartContainer for \"c87a17493be5940997552f9e598dd3a1a99851d77385206825bccfc423a4e97e\" returns successfully"
Oct 02 20:59:34 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:34.630386643Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 02 20:59:35 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:35.955950140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-th5cx,Uid:87861e3b-1048-4406-9d4f-7b1278cfbed8,Namespace:kube-system,Attempt:0,}"
Oct 02 20:59:35 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:35.982175132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2btr,Uid:5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9,Namespace:kube-system,Attempt:0,}"
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.061423714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2btr,Uid:5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f407d899871bbee38d539a0bbeba66b08a42ef6a647911bf50d4dd09dc298a9f\""
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.073894255Z" level=info msg="CreateContainer within sandbox \"f407d899871bbee38d539a0bbeba66b08a42ef6a647911bf50d4dd09dc298a9f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.093271219Z" level=info msg="CreateContainer within sandbox \"f407d899871bbee38d539a0bbeba66b08a42ef6a647911bf50d4dd09dc298a9f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87c5429d2fc8c3ccf54a6a8915c0a9c0b9c5239ca9ceaf19028b770515a2dc02\""
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.096393981Z" level=info msg="StartContainer for \"87c5429d2fc8c3ccf54a6a8915c0a9c0b9c5239ca9ceaf19028b770515a2dc02\""
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.118149948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-th5cx,Uid:87861e3b-1048-4406-9d4f-7b1278cfbed8,Namespace:kube-system,Attempt:0,} returns sandbox id \"db9341869a0c2fa62347e8966d8c3b4b08fdc2cd35ba2976aa9042dc7195fcfa\""
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.130209922Z" level=info msg="CreateContainer within sandbox \"db9341869a0c2fa62347e8966d8c3b4b08fdc2cd35ba2976aa9042dc7195fcfa\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.199434504Z" level=info msg="CreateContainer within sandbox \"db9341869a0c2fa62347e8966d8c3b4b08fdc2cd35ba2976aa9042dc7195fcfa\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"e9a32d452833036c376ed3c93cea6fcec3b9df10205045f693b010fd16ff833c\""
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.220131270Z" level=info msg="StartContainer for \"e9a32d452833036c376ed3c93cea6fcec3b9df10205045f693b010fd16ff833c\""
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.284092754Z" level=info msg="StartContainer for \"87c5429d2fc8c3ccf54a6a8915c0a9c0b9c5239ca9ceaf19028b770515a2dc02\" returns successfully"
Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.318318746Z" level=info msg="StartContainer for \"e9a32d452833036c376ed3c93cea6fcec3b9df10205045f693b010fd16ff833c\" returns successfully"
==> describe nodes <==
Name: dockerenv-775346
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=dockerenv-775346
kubernetes.io/os=linux
minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
minikube.k8s.io/name=dockerenv-775346
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_02T20_59_31_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 02 Oct 2025 20:59:27 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: dockerenv-775346
AcquireTime: <unset>
RenewTime: Thu, 02 Oct 2025 20:59:40 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 02 Oct 2025 20:59:30 +0000 Thu, 02 Oct 2025 20:59:24 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Oct 2025 20:59:30 +0000 Thu, 02 Oct 2025 20:59:24 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Oct 2025 20:59:30 +0000 Thu, 02 Oct 2025 20:59:24 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Oct 2025 20:59:30 +0000 Thu, 02 Oct 2025 20:59:24 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.49.2
Hostname: dockerenv-775346
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: 2ed34a5d5acc4537a40f0df0203022d2
System UUID: c32f97eb-1f2d-4768-a3c0-484f67964f60
Boot ID: ddea27b5-1bb4-4ff4-b6ce-678e2308ca3c
Kernel Version: 5.15.0-1084-aws
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.28
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-dockerenv-775346 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 17s
kube-system kindnet-th5cx 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 12s
kube-system kube-apiserver-dockerenv-775346 250m (12%) 0 (0%) 0 (0%) 0 (0%) 17s
kube-system kube-controller-manager-dockerenv-775346 200m (10%) 0 (0%) 0 (0%) 0 (0%) 17s
kube-system kube-proxy-x2btr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12s
kube-system kube-scheduler-dockerenv-775346 100m (5%) 0 (0%) 0 (0%) 0 (0%) 17s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 100m (5%)
memory 150Mi (1%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
  Type     Reason                   Age                From             Message
  ----     ------                   ----               ----             -------
  Normal   Starting                 10s                kube-proxy
  Normal   NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
  Normal   Starting                 25s                kubelet          Starting kubelet.
  Warning  CgroupV1                 25s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
  Normal   NodeHasSufficientMemory  24s (x8 over 25s)  kubelet          Node dockerenv-775346 status is now: NodeHasSufficientMemory
  Normal   NodeHasSufficientPID     24s (x7 over 25s)  kubelet          Node dockerenv-775346 status is now: NodeHasSufficientPID
  Normal   NodeHasNoDiskPressure    24s (x8 over 25s)  kubelet          Node dockerenv-775346 status is now: NodeHasNoDiskPressure
  Normal   Starting                 17s                kubelet          Starting kubelet.
  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
  Normal   NodeAllocatableEnforced  17s                kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  17s                kubelet          Node dockerenv-775346 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    17s                kubelet          Node dockerenv-775346 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     17s                kubelet          Node dockerenv-775346 status is now: NodeHasSufficientPID
  Normal   RegisteredNode           13s                node-controller  Node dockerenv-775346 event: Registered Node dockerenv-775346 in Controller
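Note: the two Starting/CgroupV1 event pairs (25s and 17s ago) show the kubelet was started twice during provisioning, consistent with minikube's kubeadm bootstrap restarting it; the RegisteredNode event at 13s confirms the second instance registered successfully. To inspect the restarts directly (a sketch, assuming systemd runs the kubelet unit inside the node container, as in the kicbase image):

$ out/minikube-linux-arm64 ssh -p dockerenv-775346 -- journalctl -u kubelet --no-pager | grep -i 'started kubelet'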
==> dmesg <==
[Oct 2 20:00] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
[Oct 2 20:51] kauditd_printk_skb: 8 callbacks suppressed
==> etcd [8f0f173c3c0b15977361c646e8f1ec54ebbfe51e58eaa06b948b5613c3ef1870] <==
{"level":"warn","ts":"2025-10-02T20:59:25.975891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59570","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:25.996239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.010290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.036021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59618","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.052094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59648","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.071829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.086936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59686","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.114570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59710","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.127387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59716","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.145248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59738","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.163877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59754","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.179323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59764","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.196430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59784","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.215114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59804","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.232874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59810","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.251741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.274562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.313201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.315898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59894","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.329129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59912","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.351911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59932","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.387471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59956","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.416016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59992","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.442040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59994","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T20:59:26.594230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60012","server-name":"","error":"EOF"}
==> kernel <==
20:59:47 up 16:42, 0 user, load average: 1.54, 2.33, 3.82
Linux dockerenv-775346 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kindnet [e9a32d452833036c376ed3c93cea6fcec3b9df10205045f693b010fd16ff833c] <==
I1002 20:59:36.492901 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I1002 20:59:36.493329 1 main.go:139] hostIP = 192.168.49.2
podIP = 192.168.49.2
I1002 20:59:36.493557 1 main.go:148] setting mtu 1500 for CNI
I1002 20:59:36.493660 1 main.go:178] kindnetd IP family: "ipv4"
I1002 20:59:36.493761 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-10-02T20:59:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1002 20:59:36.692507 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1002 20:59:36.692590 1 controller.go:381] "Waiting for informer caches to sync"
I1002 20:59:36.692622 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1002 20:59:36.692927 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
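Note: the final kindnet line is benign. containerd 1.7 ships with NRI disabled by default, so /var/run/nri/nri.sock does not exist and kindnet's network-policy NRI plugin simply exits, leaving the RunPodSandbox/RemovePodSandbox hooks unused. To confirm the setting (a sketch, assuming the default containerd config path inside the node container):

$ out/minikube-linux-arm64 ssh -p dockerenv-775346 -- grep -A3 'io.containerd.nri' /etc/containerd/config.toml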
==> kube-apiserver [5fb27d28684302e9bdd3e507c5571ab54f0ca6d2eafc095348f4186270ef6dd0] <==
E1002 20:59:27.745024 1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
I1002 20:59:27.796240 1 controller.go:667] quota admission added evaluator for: namespaces
I1002 20:59:27.813987 1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
I1002 20:59:27.814419 1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
E1002 20:59:27.824723 1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
I1002 20:59:27.842773 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1002 20:59:27.849308 1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
I1002 20:59:27.931808 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1002 20:59:28.395126 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I1002 20:59:28.400379 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I1002 20:59:28.400402 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1002 20:59:29.153335 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1002 20:59:29.211663 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1002 20:59:29.305957 1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W1002 20:59:29.313126 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1002 20:59:29.314298 1 controller.go:667] quota admission added evaluator for: endpoints
I1002 20:59:29.319493 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1002 20:59:29.563640 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1002 20:59:30.301854 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1002 20:59:30.320794 1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I1002 20:59:30.333587 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1002 20:59:35.016841 1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
I1002 20:59:35.312352 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1002 20:59:35.319361 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1002 20:59:35.361173 1 controller.go:667] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager [c87a17493be5940997552f9e598dd3a1a99851d77385206825bccfc423a4e97e] <==
I1002 20:59:34.604102 1 shared_informer.go:356] "Caches are synced" controller="HPA"
I1002 20:59:34.604114 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1002 20:59:34.604356 1 shared_informer.go:356] "Caches are synced" controller="taint"
I1002 20:59:34.604366 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1002 20:59:34.604522 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I1002 20:59:34.604859 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="dockerenv-775346"
I1002 20:59:34.605068 1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
I1002 20:59:34.604672 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1002 20:59:34.605658 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1002 20:59:34.605830 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1002 20:59:34.607355 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1002 20:59:34.609225 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1002 20:59:34.609443 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1002 20:59:34.609538 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1002 20:59:34.610387 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I1002 20:59:34.612406 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1002 20:59:34.612583 1 shared_informer.go:356] "Caches are synced" controller="node"
I1002 20:59:34.612737 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I1002 20:59:34.612901 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1002 20:59:34.613035 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1002 20:59:34.613146 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1002 20:59:34.614922 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1002 20:59:34.621006 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1002 20:59:34.623314 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1002 20:59:34.624137 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="dockerenv-775346" podCIDRs=["10.244.0.0/24"]
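Note: the "Set node PodCIDR" line from the node-ipam-controller is what populates the PodCIDR field shown in the describe output above, a /24 carved out of kindnet's 10.244.0.0/16 noMask subnet; once kindnet writes the CNI config for that range, the node's NotReady condition clears and the controller leaves master disruption mode. To read the assignment back (sketch):

$ kubectl --context dockerenv-775346 get node dockerenv-775346 -o jsonpath='{.spec.podCIDR}'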
==> kube-proxy [87c5429d2fc8c3ccf54a6a8915c0a9c0b9c5239ca9ceaf19028b770515a2dc02] <==
I1002 20:59:36.344190 1 server_linux.go:53] "Using iptables proxy"
I1002 20:59:36.452973 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1002 20:59:36.560385 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1002 20:59:36.560603 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1002 20:59:36.560713 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1002 20:59:36.579743 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1002 20:59:36.579982 1 server_linux.go:132] "Using iptables Proxier"
I1002 20:59:36.586002 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1002 20:59:36.586509 1 server.go:527] "Version info" version="v1.34.1"
I1002 20:59:36.586805 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 20:59:36.589599 1 config.go:200] "Starting service config controller"
I1002 20:59:36.589933 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1002 20:59:36.590072 1 config.go:106] "Starting endpoint slice config controller"
I1002 20:59:36.590156 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1002 20:59:36.590312 1 config.go:403] "Starting serviceCIDR config controller"
I1002 20:59:36.591094 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1002 20:59:36.595634 1 config.go:309] "Starting node config controller"
I1002 20:59:36.595802 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1002 20:59:36.595883 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1002 20:59:36.690624 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1002 20:59:36.690828 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1002 20:59:36.691226 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
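Note: the only error-level kube-proxy line is the unset-nodePortAddresses warning, and its message explains the consequence: NodePort traffic is accepted on every local IP. If the suggested `--nodeport-addresses primary` were wanted in a kubeadm-managed cluster like this one, the usual route is the kube-proxy ConfigMap (a sketch; the field name follows the KubeProxyConfiguration API):

$ kubectl --context dockerenv-775346 -n kube-system edit configmap kube-proxy
# set nodePortAddresses: ["primary"] in the embedded KubeProxyConfiguration, then roll the daemonset:
$ kubectl --context dockerenv-775346 -n kube-system rollout restart daemonset kube-proxy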
==> kube-scheduler [48296eb5b38fbfa582596583d745214e9734a1b113e60bde5b5377e4418aaafe] <==
I1002 20:59:28.174677 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 20:59:28.178815 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 20:59:28.178856 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 20:59:28.179834 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1002 20:59:28.180222 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1002 20:59:28.188995 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
E1002 20:59:28.193355 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1002 20:59:28.193510 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1002 20:59:28.193645 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1002 20:59:28.193685 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1002 20:59:28.193723 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1002 20:59:28.195660 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1002 20:59:28.195734 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1002 20:59:28.196012 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1002 20:59:28.196131 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1002 20:59:28.196793 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1002 20:59:28.201368 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1002 20:59:28.201589 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1002 20:59:28.201764 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1002 20:59:28.201935 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1002 20:59:28.202829 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1002 20:59:28.203490 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1002 20:59:28.203580 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1002 20:59:28.203729 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1002 20:59:29.379249 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
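Note: the burst of "Failed to watch ... forbidden" errors is the scheduler racing the apiserver's RBAC bootstrap at 20:59:28; the default system:kube-scheduler bindings had not been created yet. The final line (caches synced at 20:59:29) shows the watches recovered on retry. A quick way to confirm the permissions after startup (sketch):

$ kubectl --context dockerenv-775346 auth can-i list nodes --as=system:kube-scheduler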
==> kubelet <==
Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.381019 1456 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-dockerenv-775346"
Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: E1002 20:59:31.399512 1456 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-dockerenv-775346\" already exists" pod="kube-system/kube-apiserver-dockerenv-775346"
Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.412370 1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-dockerenv-775346" podStartSLOduration=1.412352563 podStartE2EDuration="1.412352563s" podCreationTimestamp="2025-10-02 20:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:31.412159952 +0000 UTC m=+1.270369068" watchObservedRunningTime="2025-10-02 20:59:31.412352563 +0000 UTC m=+1.270561687"
Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.468909 1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-dockerenv-775346" podStartSLOduration=1.468890142 podStartE2EDuration="1.468890142s" podCreationTimestamp="2025-10-02 20:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:31.435466117 +0000 UTC m=+1.293675241" watchObservedRunningTime="2025-10-02 20:59:31.468890142 +0000 UTC m=+1.327099258"
Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.528045 1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-dockerenv-775346" podStartSLOduration=1.5280260669999999 podStartE2EDuration="1.528026067s" podCreationTimestamp="2025-10-02 20:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:31.482620257 +0000 UTC m=+1.340829373" watchObservedRunningTime="2025-10-02 20:59:31.528026067 +0000 UTC m=+1.386235192"
Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.529773 1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-dockerenv-775346" podStartSLOduration=1.5297338219999999 podStartE2EDuration="1.529733822s" podCreationTimestamp="2025-10-02 20:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:31.527982982 +0000 UTC m=+1.386192106" watchObservedRunningTime="2025-10-02 20:59:31.529733822 +0000 UTC m=+1.387942995"
Oct 02 20:59:34 dockerenv-775346 kubelet[1456]: I1002 20:59:34.630015 1456 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Oct 02 20:59:34 dockerenv-775346 kubelet[1456]: I1002 20:59:34.630616 1456 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109730 1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcsrl\" (UniqueName: \"kubernetes.io/projected/87861e3b-1048-4406-9d4f-7b1278cfbed8-kube-api-access-tcsrl\") pod \"kindnet-th5cx\" (UID: \"87861e3b-1048-4406-9d4f-7b1278cfbed8\") " pod="kube-system/kindnet-th5cx"
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109786 1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-lib-modules\") pod \"kube-proxy-x2btr\" (UID: \"5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9\") " pod="kube-system/kube-proxy-x2btr"
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109815 1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/87861e3b-1048-4406-9d4f-7b1278cfbed8-cni-cfg\") pod \"kindnet-th5cx\" (UID: \"87861e3b-1048-4406-9d4f-7b1278cfbed8\") " pod="kube-system/kindnet-th5cx"
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109833 1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87861e3b-1048-4406-9d4f-7b1278cfbed8-xtables-lock\") pod \"kindnet-th5cx\" (UID: \"87861e3b-1048-4406-9d4f-7b1278cfbed8\") " pod="kube-system/kindnet-th5cx"
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109850 1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87861e3b-1048-4406-9d4f-7b1278cfbed8-lib-modules\") pod \"kindnet-th5cx\" (UID: \"87861e3b-1048-4406-9d4f-7b1278cfbed8\") " pod="kube-system/kindnet-th5cx"
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109870 1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tknr\" (UniqueName: \"kubernetes.io/projected/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-kube-api-access-5tknr\") pod \"kube-proxy-x2btr\" (UID: \"5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9\") " pod="kube-system/kube-proxy-x2btr"
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109891 1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-kube-proxy\") pod \"kube-proxy-x2btr\" (UID: \"5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9\") " pod="kube-system/kube-proxy-x2btr"
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109915 1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-xtables-lock\") pod \"kube-proxy-x2btr\" (UID: \"5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9\") " pod="kube-system/kube-proxy-x2btr"
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.222573 1456 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.222612 1456 projected.go:196] Error preparing data for projected volume kube-api-access-tcsrl for pod kube-system/kindnet-th5cx: configmap "kube-root-ca.crt" not found
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.222690 1456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87861e3b-1048-4406-9d4f-7b1278cfbed8-kube-api-access-tcsrl podName:87861e3b-1048-4406-9d4f-7b1278cfbed8 nodeName:}" failed. No retries permitted until 2025-10-02 20:59:35.722666171 +0000 UTC m=+5.580875287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tcsrl" (UniqueName: "kubernetes.io/projected/87861e3b-1048-4406-9d4f-7b1278cfbed8-kube-api-access-tcsrl") pod "kindnet-th5cx" (UID: "87861e3b-1048-4406-9d4f-7b1278cfbed8") : configmap "kube-root-ca.crt" not found
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.226692 1456 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.226728 1456 projected.go:196] Error preparing data for projected volume kube-api-access-5tknr for pod kube-system/kube-proxy-x2btr: configmap "kube-root-ca.crt" not found
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.226792 1456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-kube-api-access-5tknr podName:5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9 nodeName:}" failed. No retries permitted until 2025-10-02 20:59:35.726769725 +0000 UTC m=+5.584978849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5tknr" (UniqueName: "kubernetes.io/projected/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-kube-api-access-5tknr") pod "kube-proxy-x2btr" (UID: "5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9") : configmap "kube-root-ca.crt" not found
Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.815702 1456 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Oct 02 20:59:36 dockerenv-775346 kubelet[1456]: I1002 20:59:36.429801 1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x2btr" podStartSLOduration=1.429782708 podStartE2EDuration="1.429782708s" podCreationTimestamp="2025-10-02 20:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:36.406256626 +0000 UTC m=+6.264465758" watchObservedRunningTime="2025-10-02 20:59:36.429782708 +0000 UTC m=+6.287991832"
Oct 02 20:59:36 dockerenv-775346 kubelet[1456]: I1002 20:59:36.883185 1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-th5cx" podStartSLOduration=1.883155544 podStartE2EDuration="1.883155544s" podCreationTimestamp="2025-10-02 20:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:36.430458666 +0000 UTC m=+6.288667790" watchObservedRunningTime="2025-10-02 20:59:36.883155544 +0000 UTC m=+6.741364684"
-- /stdout --
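Note: the kubelet's 'configmap "kube-root-ca.crt" not found' errors above are another bootstrap race: the root-ca-cert-publisher in kube-controller-manager only publishes that ConfigMap into each namespace after its caches sync (20:59:34 in the controller-manager log), so the volume mounts retried 500ms later and succeeded, as the subsequent "Observed pod startup duration" lines show. To verify it exists afterwards (sketch):

$ kubectl --context dockerenv-775346 -n kube-system get configmap kube-root-ca.crt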
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p dockerenv-775346 -n dockerenv-775346
helpers_test.go:269: (dbg) Run: kubectl --context dockerenv-775346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-rmx99 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context dockerenv-775346 describe pod coredns-66bc5c9577-rmx99 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context dockerenv-775346 describe pod coredns-66bc5c9577-rmx99 storage-provisioner: exit status 1 (85.133645ms)
** stderr **
Error from server (NotFound): pods "coredns-66bc5c9577-rmx99" not found
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:287: kubectl --context dockerenv-775346 describe pod coredns-66bc5c9577-rmx99 storage-provisioner: exit status 1
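Note: the NotFound result is likely a namespace mismatch rather than the pods vanishing: the non-running pods were discovered with -A (all namespaces) and live in kube-system, but the describe call passes no namespace flag and therefore queries default. A namespaced retry (sketch):

$ kubectl --context dockerenv-775346 -n kube-system describe pod coredns-66bc5c9577-rmx99 storage-provisioner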
helpers_test.go:175: Cleaning up "dockerenv-775346" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-arm64 delete -p dockerenv-775346
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-775346: (2.268567365s)
--- FAIL: TestDockerEnvContainerd (48.81s)