=== RUN TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2x25h" [834daeba-b747-4a9b-92f8-8b6002a56239] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-552634 -n old-k8s-version-552634
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-08 13:27:50.21233544 +0000 UTC m=+3300.547870521
start_stop_delete_test.go:272: (dbg) Run: kubectl --context old-k8s-version-552634 describe po kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-552634 describe po kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard:
Name: kubernetes-dashboard-8694d4445c-2x25h
Namespace: kubernetes-dashboard
Priority: 0
Service Account: kubernetes-dashboard
Node: old-k8s-version-552634/192.168.76.2
Start Time: Mon, 08 Sep 2025 13:18:26 +0000
Labels: gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations: <none>
Status: Pending
IP: 10.244.0.6
IPs:
IP: 10.244.0.6
Controlled By: ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:
Image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:
Port: 9090/TCP
Host Port: 0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-94n7n (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-94n7n:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m24s default-scheduler Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h to old-k8s-version-552634
Normal Pulling 7m44s (x4 over 9m23s) kubelet Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning Failed 7m43s (x4 over 9m18s) kubelet Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 7m43s (x4 over 9m18s) kubelet Error: ErrImagePull
Warning Failed 7m30s (x6 over 9m17s) kubelet Error: ImagePullBackOff
Normal BackOff 4m14s (x20 over 9m17s) kubelet Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
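The events show the pull failing with Docker Hub's anonymous pull limit (HTTP 429), so the pod is stuck in ImagePullBackOff through no fault of the cluster. A minimal mitigation sketch for a CI host like this one, assuming the Docker CLI and a Docker Hub account are available (DOCKERHUB_USER is a placeholder; whether a locally loaded image satisfies a digest-pinned reference depends on the runtime, so authenticating the pull is the more reliable fix):

    # Check the remaining anonymous pull quota (technique from Docker's rate-limit docs):
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

    # Authenticate pulls to raise the limit, or pre-seed the image into the minikube node
    # so the kubelet never has to hit the registry:
    docker login -u "$DOCKERHUB_USER"
    docker pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    minikube -p old-k8s-version-552634 image load docker.io/kubernetesui/dashboard:v2.7.0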
start_stop_delete_test.go:272: (dbg) Run: kubectl --context old-k8s-version-552634 logs kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-552634 logs kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard: exit status 1 (132.241411ms)
** stderr **
Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-2x25h" is waiting to start: trying and failing to pull image
** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-552634 logs kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
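Since kubectl logs has nothing to show for a container that never started, the registry-side failure can also be confirmed from the node itself; a sketch using this run's profile (crictl pull exercises the same containerd pull path the kubelet uses):

    minikube -p old-k8s-version-552634 ssh -- sudo crictl pull \
      docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    # while throttled, this should return the same 429 "toomanyrequests" error the kubelet reported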
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect old-k8s-version-552634
helpers_test.go:243: (dbg) docker inspect old-k8s-version-552634:
-- stdout --
[
{
"Id": "35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5",
"Created": "2025-09-08T13:16:31.223099842Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2954522,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-09-08T13:18:01.325654388Z",
"FinishedAt": "2025-09-08T13:18:00.352577855Z"
},
"Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
"ResolvConfPath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/hostname",
"HostsPath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/hosts",
"LogPath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5-json.log",
"Name": "/old-k8s-version-552634",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-552634:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-552634",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5",
"LowerDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5-init/diff:/var/lib/docker/overlay2/665ff8bb3d07b8342629df037737f3667c7c59d9d1f85930dc3dfdf138460626/diff",
"MergedDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5/merged",
"UpperDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5/diff",
"WorkDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-552634",
"Source": "/var/lib/docker/volumes/old-k8s-version-552634/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-552634",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-552634",
"name.minikube.sigs.k8s.io": "old-k8s-version-552634",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1054ee47c483a1428606e5542d6cd92f987e38d7fe61c05d8f0b2f04b8c0d12a",
"SandboxKey": "/var/run/docker/netns/1054ee47c483",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36723"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36724"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36727"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36725"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36726"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-552634": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:bc:34:57:9c:19",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "31390a5008f77087656c6be660b8e10e100917da7426cbc36a65283e697b2fb6",
"EndpointID": "cb35276bdbd42ef51d3288d1ebaf44c4e25b96a9f437d50b2f35153c806c1498",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-552634",
"35ad4f8124aa"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-552634 -n old-k8s-version-552634
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-552634 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-552634 logs -n 25: (2.078355107s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ force-systemd-env-386836 ssh cat /etc/containerd/config.toml │ force-systemd-env-386836 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
│ delete │ -p force-systemd-env-386836 │ force-systemd-env-386836 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
│ start │ -p cert-expiration-713579 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd │ cert-expiration-713579 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
│ start │ -p pause-864887 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ pause-864887 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
│ pause │ -p pause-864887 --alsologtostderr -v=5 │ pause-864887 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
│ unpause │ -p pause-864887 --alsologtostderr -v=5 │ pause-864887 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
│ pause │ -p pause-864887 --alsologtostderr -v=5 │ pause-864887 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
│ delete │ -p pause-864887 --alsologtostderr -v=5 │ pause-864887 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
│ delete │ -p pause-864887 │ pause-864887 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
│ start │ -p cert-options-480035 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=containerd │ cert-options-480035 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:16 UTC │
│ ssh │ cert-options-480035 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt │ cert-options-480035 │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
│ ssh │ -p cert-options-480035 -- sudo cat /etc/kubernetes/admin.conf │ cert-options-480035 │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
│ delete │ -p cert-options-480035 │ cert-options-480035 │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
│ start │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:17 UTC │
│ addons │ enable metrics-server -p old-k8s-version-552634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:17 UTC │
│ stop │ -p old-k8s-version-552634 --alsologtostderr -v=3 │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:18 UTC │
│ addons │ enable dashboard -p old-k8s-version-552634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
│ start │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
│ start │ -p cert-expiration-713579 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd │ cert-expiration-713579 │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
│ delete │ -p cert-expiration-713579 │ cert-expiration-713579 │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:19 UTC │
│ start │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0 │ no-preload-978911 │ jenkins │ v1.36.0 │ 08 Sep 25 13:19 UTC │ 08 Sep 25 13:20 UTC │
│ addons │ enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-978911 │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
│ stop │ -p no-preload-978911 --alsologtostderr -v=3 │ no-preload-978911 │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
│ addons │ enable dashboard -p no-preload-978911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ no-preload-978911 │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
│ start │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0 │ no-preload-978911 │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:21 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/09/08 13:20:35
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.24.6 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0908 13:20:35.931338 2962976 out.go:360] Setting OutFile to fd 1 ...
I0908 13:20:35.931455 2962976 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:20:35.931470 2962976 out.go:374] Setting ErrFile to fd 2...
I0908 13:20:35.931478 2962976 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:20:35.931739 2962976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
I0908 13:20:35.932098 2962976 out.go:368] Setting JSON to false
I0908 13:20:35.933046 2962976 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":68588,"bootTime":1757269048,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0908 13:20:35.933113 2962976 start.go:140] virtualization:
I0908 13:20:35.938091 2962976 out.go:179] * [no-preload-978911] minikube v1.36.0 on Ubuntu 20.04 (arm64)
I0908 13:20:35.941259 2962976 out.go:179] - MINIKUBE_LOCATION=21508
I0908 13:20:35.941303 2962976 notify.go:220] Checking for updates...
I0908 13:20:35.946964 2962976 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0908 13:20:35.949861 2962976 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
I0908 13:20:35.952715 2962976 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
I0908 13:20:35.956376 2962976 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0908 13:20:35.959333 2962976 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0908 13:20:35.962746 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:20:35.963352 2962976 driver.go:421] Setting default libvirt URI to qemu:///system
I0908 13:20:35.991533 2962976 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
I0908 13:20:35.991638 2962976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 13:20:36.072292 2962976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:20:36.052728775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 13:20:36.072406 2962976 docker.go:318] overlay module found
I0908 13:20:36.075656 2962976 out.go:179] * Using the docker driver based on existing profile
I0908 13:20:36.078527 2962976 start.go:304] selected driver: docker
I0908 13:20:36.078546 2962976 start.go:918] validating driver "docker" against &{Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 13:20:36.078664 2962976 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0908 13:20:36.079452 2962976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 13:20:36.145451 2962976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:20:36.13563539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 13:20:36.145819 2962976 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0908 13:20:36.145841 2962976 cni.go:84] Creating CNI manager for ""
I0908 13:20:36.145901 2962976 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0908 13:20:36.145935 2962976 start.go:348] cluster config:
{Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 13:20:36.150798 2962976 out.go:179] * Starting "no-preload-978911" primary control-plane node in "no-preload-978911" cluster
I0908 13:20:36.153634 2962976 cache.go:123] Beginning downloading kic base image for docker with containerd
I0908 13:20:36.156700 2962976 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
I0908 13:20:36.159655 2962976 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0908 13:20:36.159871 2962976 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/config.json ...
I0908 13:20:36.160216 2962976 cache.go:107] acquiring lock: {Name:mk9f7cd9bf685dbdd22a939bba5743203e9424b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 13:20:36.160296 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0908 13:20:36.160304 2962976 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.209µs
I0908 13:20:36.160319 2962976 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0908 13:20:36.159703 2962976 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
I0908 13:20:36.160406 2962976 cache.go:107] acquiring lock: {Name:mka44a87e995f06fac0280236e9044a05cbf0c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 13:20:36.160447 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 exists
I0908 13:20:36.160453 2962976 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0" took 50.452µs
I0908 13:20:36.160460 2962976 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
I0908 13:20:36.160483 2962976 cache.go:107] acquiring lock: {Name:mkb0b6bdd176d599d5a383a38a60d5e44912d326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 13:20:36.160512 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 exists
I0908 13:20:36.160517 2962976 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0" took 35.248µs
I0908 13:20:36.160522 2962976 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
I0908 13:20:36.160531 2962976 cache.go:107] acquiring lock: {Name:mk3a1846ff1d17320a61c4f0cd7f03a465580c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 13:20:36.160557 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
I0908 13:20:36.160562 2962976 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0" took 31.646µs
I0908 13:20:36.160568 2962976 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
I0908 13:20:36.160576 2962976 cache.go:107] acquiring lock: {Name:mk1c90c15fea0bf3c7271fb14d259c914df38d83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 13:20:36.160600 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 exists
I0908 13:20:36.160605 2962976 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0" took 29.727µs
I0908 13:20:36.160612 2962976 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
I0908 13:20:36.160622 2962976 cache.go:107] acquiring lock: {Name:mke7032f762990626a62b2503bb54454bb8e4428 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 13:20:36.160650 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
I0908 13:20:36.160655 2962976 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 33.541µs
I0908 13:20:36.160660 2962976 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
I0908 13:20:36.160669 2962976 cache.go:107] acquiring lock: {Name:mk8b0387706fadd68f571a10efda673c0c270d63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 13:20:36.160693 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
I0908 13:20:36.160698 2962976 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.12µs
I0908 13:20:36.160709 2962976 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
I0908 13:20:36.160719 2962976 cache.go:107] acquiring lock: {Name:mk33ca43d20f07ddc371c694dc9c7a9ebcb088c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 13:20:36.160744 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
I0908 13:20:36.160749 2962976 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.663µs
I0908 13:20:36.160754 2962976 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
I0908 13:20:36.160760 2962976 cache.go:87] Successfully saved all images to host disk.
I0908 13:20:36.180376 2962976 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
I0908 13:20:36.180401 2962976 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
I0908 13:20:36.180415 2962976 cache.go:232] Successfully downloaded all kic artifacts
I0908 13:20:36.180445 2962976 start.go:360] acquireMachinesLock for no-preload-978911: {Name:mk7699a0142cc873eeb1530cb26c114199650434 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 13:20:36.180501 2962976 start.go:364] duration metric: took 35.707µs to acquireMachinesLock for "no-preload-978911"
I0908 13:20:36.180529 2962976 start.go:96] Skipping create...Using existing machine configuration
I0908 13:20:36.180538 2962976 fix.go:54] fixHost starting:
I0908 13:20:36.180803 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
I0908 13:20:36.198068 2962976 fix.go:112] recreateIfNeeded on no-preload-978911: state=Stopped err=<nil>
W0908 13:20:36.198097 2962976 fix.go:138] unexpected machine state, will restart: <nil>
I0908 13:20:36.201397 2962976 out.go:252] * Restarting existing docker container for "no-preload-978911" ...
I0908 13:20:36.201532 2962976 cli_runner.go:164] Run: docker start no-preload-978911
I0908 13:20:36.450658 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
I0908 13:20:36.474394 2962976 kic.go:430] container "no-preload-978911" state is running.
I0908 13:20:36.474790 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
I0908 13:20:36.494941 2962976 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/config.json ...
I0908 13:20:36.495172 2962976 machine.go:93] provisionDockerMachine start ...
I0908 13:20:36.495236 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:36.514963 2962976 main.go:141] libmachine: Using SSH client type: native
I0908 13:20:36.515301 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil> [] 0s} 127.0.0.1 36733 <nil> <nil>}
I0908 13:20:36.515314 2962976 main.go:141] libmachine: About to run SSH command:
hostname
I0908 13:20:36.515903 2962976 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55582->127.0.0.1:36733: read: connection reset by peer
I0908 13:20:39.637915 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-978911
I0908 13:20:39.637978 2962976 ubuntu.go:182] provisioning hostname "no-preload-978911"
I0908 13:20:39.638058 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:39.656178 2962976 main.go:141] libmachine: Using SSH client type: native
I0908 13:20:39.656497 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil> [] 0s} 127.0.0.1 36733 <nil> <nil>}
I0908 13:20:39.656514 2962976 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-978911 && echo "no-preload-978911" | sudo tee /etc/hostname
I0908 13:20:39.795726 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-978911
I0908 13:20:39.795805 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:39.814502 2962976 main.go:141] libmachine: Using SSH client type: native
I0908 13:20:39.814810 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil> [] 0s} 127.0.0.1 36733 <nil> <nil>}
I0908 13:20:39.814835 2962976 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-978911' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-978911/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-978911' | sudo tee -a /etc/hosts;
fi
fi
I0908 13:20:39.950789 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0908 13:20:39.950812 2962976 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
I0908 13:20:39.950834 2962976 ubuntu.go:190] setting up certificates
I0908 13:20:39.950843 2962976 provision.go:84] configureAuth start
I0908 13:20:39.950907 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
I0908 13:20:39.972214 2962976 provision.go:143] copyHostCerts
I0908 13:20:39.972295 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
I0908 13:20:39.972317 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
I0908 13:20:39.972393 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
I0908 13:20:39.972496 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
I0908 13:20:39.972501 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
I0908 13:20:39.972526 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
I0908 13:20:39.972586 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
I0908 13:20:39.972591 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
I0908 13:20:39.972613 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
I0908 13:20:39.972667 2962976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.no-preload-978911 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-978911]
I0908 13:20:40.245168 2962976 provision.go:177] copyRemoteCerts
I0908 13:20:40.245243 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0908 13:20:40.245295 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:40.263254 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
I0908 13:20:40.355579 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0908 13:20:40.380935 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0908 13:20:40.406306 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0908 13:20:40.432096 2962976 provision.go:87] duration metric: took 481.231644ms to configureAuth
I0908 13:20:40.432126 2962976 ubuntu.go:206] setting minikube options for container-runtime
I0908 13:20:40.432326 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:20:40.432340 2962976 machine.go:96] duration metric: took 3.937157056s to provisionDockerMachine
I0908 13:20:40.432348 2962976 start.go:293] postStartSetup for "no-preload-978911" (driver="docker")
I0908 13:20:40.432359 2962976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0908 13:20:40.432420 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0908 13:20:40.432470 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:40.449780 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
I0908 13:20:40.539192 2962976 ssh_runner.go:195] Run: cat /etc/os-release
I0908 13:20:40.543086 2962976 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0908 13:20:40.543119 2962976 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0908 13:20:40.543129 2962976 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0908 13:20:40.543142 2962976 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0908 13:20:40.543156 2962976 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
I0908 13:20:40.543213 2962976 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
I0908 13:20:40.543299 2962976 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
I0908 13:20:40.543407 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0908 13:20:40.552362 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
I0908 13:20:40.577039 2962976 start.go:296] duration metric: took 144.675775ms for postStartSetup
I0908 13:20:40.577118 2962976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0908 13:20:40.577178 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:40.593852 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
I0908 13:20:40.684143 2962976 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0908 13:20:40.688435 2962976 fix.go:56] duration metric: took 4.507889752s for fixHost
I0908 13:20:40.688464 2962976 start.go:83] releasing machines lock for "no-preload-978911", held for 4.507944625s
I0908 13:20:40.688533 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
I0908 13:20:40.705355 2962976 ssh_runner.go:195] Run: cat /version.json
I0908 13:20:40.705419 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:40.705609 2962976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0908 13:20:40.705666 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:40.727686 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
I0908 13:20:40.735980 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
I0908 13:20:40.822021 2962976 ssh_runner.go:195] Run: systemctl --version
I0908 13:20:40.992138 2962976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0908 13:20:40.996696 2962976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0908 13:20:41.017301 2962976 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0908 13:20:41.017379 2962976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0908 13:20:41.026660 2962976 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
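The bridge-disable one-liner above is hard to read as a single log line; reflowed for readability, it is the same command minikube runs (the preceding loopback patch similarly adds a "name" field and forces cniVersion 1.0.0 in any *loopback.conf* file):

    # sideline any bridge/podman CNI configs so the CNI minikube manages can own the pod network
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;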
I0908 13:20:41.026732 2962976 start.go:495] detecting cgroup driver to use...
I0908 13:20:41.026779 2962976 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0908 13:20:41.026849 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0908 13:20:41.041661 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0908 13:20:41.053793 2962976 docker.go:218] disabling cri-docker service (if available) ...
I0908 13:20:41.053929 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0908 13:20:41.068141 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0908 13:20:41.079992 2962976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0908 13:20:41.158894 2962976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0908 13:20:41.250698 2962976 docker.go:234] disabling docker service ...
I0908 13:20:41.250775 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0908 13:20:41.265554 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0908 13:20:41.277916 2962976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0908 13:20:41.368244 2962976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0908 13:20:41.462413 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0908 13:20:41.475092 2962976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0908 13:20:41.493271 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I0908 13:20:41.505845 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0908 13:20:41.517791 2962976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0908 13:20:41.517922 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0908 13:20:41.528370 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0908 13:20:41.541165 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0908 13:20:41.551232 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0908 13:20:41.562025 2962976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0908 13:20:41.572196 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0908 13:20:41.582707 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0908 13:20:41.593503 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0908 13:20:41.604600 2962976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0908 13:20:41.614626 2962976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0908 13:20:41.623508 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0908 13:20:41.714072 2962976 ssh_runner.go:195] Run: sudo systemctl restart containerd
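After the config rewrites and the containerd restart above, the effective settings can be spot-checked from the host; a minimal sketch using this run's profile (the grep covers only the three values the sed edits set):

    minikube -p no-preload-978911 ssh -- sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # expected, per the edits above:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   conf_dir = "/etc/cni/net.d"
    #   SystemdCgroup = false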
I0908 13:20:41.897907 2962976 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0908 13:20:41.898011 2962976 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0908 13:20:41.902605 2962976 start.go:563] Will wait 60s for crictl version
I0908 13:20:41.902693 2962976 ssh_runner.go:195] Run: which crictl
I0908 13:20:41.907099 2962976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0908 13:20:41.945823 2962976 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0908 13:20:41.945960 2962976 ssh_runner.go:195] Run: containerd --version
I0908 13:20:41.970995 2962976 ssh_runner.go:195] Run: containerd --version
I0908 13:20:42.004794 2962976 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
I0908 13:20:42.019524 2962976 cli_runner.go:164] Run: docker network inspect no-preload-978911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 13:20:42.038523 2962976 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0908 13:20:42.042566 2962976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
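The bash one-liner above is minikube's atomic /etc/hosts update: filter out any stale host.minikube.internal entry, append the fresh mapping, write to a temp file, then sudo-copy it into place (a plain redirection into /etc/hosts would fail without root). The same pattern reappears below for control-plane.minikube.internal. To spot-check the result:

grep 'minikube.internal' /etc/hosts   # expect host.minikube.internal (and, later, control-plane.minikube.internal)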
I0908 13:20:42.054333 2962976 kubeadm.go:875] updating cluster {Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0908 13:20:42.054518 2962976 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0908 13:20:42.054568 2962976 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 13:20:42.107352 2962976 containerd.go:627] all images are preloaded for containerd runtime.
I0908 13:20:42.107387 2962976 cache_images.go:85] Images are preloaded, skipping loading
I0908 13:20:42.107396 2962976 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
I0908 13:20:42.107557 2962976 kubeadm.go:938] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-978911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
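The unit text above becomes the kubeadm drop-in scp'd below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before setting the new one. To view the merged result on the node (illustrative):

systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
systemctl show kubelet -p ExecStart   # the effective command line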
I0908 13:20:42.107646 2962976 ssh_runner.go:195] Run: sudo crictl info
I0908 13:20:42.191420 2962976 cni.go:84] Creating CNI manager for ""
I0908 13:20:42.191459 2962976 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0908 13:20:42.191472 2962976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0908 13:20:42.191522 2962976 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-978911 NodeName:no-preload-978911 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0908 13:20:42.191696 2962976 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "no-preload-978911"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
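minikube renders this config to /var/tmp/minikube/kubeadm.yaml.new (scp'd below) and later diffs it against the copy already on disk to decide whether the control plane needs reconfiguring (the `sudo diff -u` run appears further down). The same check by hand:

sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo 'no reconfiguration needed'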
I0908 13:20:42.191812 2962976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
I0908 13:20:42.205135 2962976 binaries.go:44] Found k8s binaries, skipping transfer
I0908 13:20:42.205221 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0908 13:20:42.217434 2962976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I0908 13:20:42.248179 2962976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0908 13:20:42.275286 2962976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
I0908 13:20:42.308117 2962976 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0908 13:20:42.312692 2962976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0908 13:20:42.326442 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0908 13:20:42.421795 2962976 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0908 13:20:42.436584 2962976 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911 for IP: 192.168.85.2
I0908 13:20:42.436607 2962976 certs.go:194] generating shared ca certs ...
I0908 13:20:42.436625 2962976 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0908 13:20:42.436807 2962976 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
I0908 13:20:42.436928 2962976 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
I0908 13:20:42.436943 2962976 certs.go:256] generating profile certs ...
I0908 13:20:42.437066 2962976 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.key
I0908 13:20:42.437162 2962976 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.key.7fd60a6c
I0908 13:20:42.437238 2962976 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.key
I0908 13:20:42.437393 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
W0908 13:20:42.437445 2962976 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
I0908 13:20:42.437460 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
I0908 13:20:42.437491 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
I0908 13:20:42.437542 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
I0908 13:20:42.437581 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
I0908 13:20:42.437641 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
I0908 13:20:42.438302 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0908 13:20:42.466909 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0908 13:20:42.494218 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0908 13:20:42.521632 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0908 13:20:42.551803 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0908 13:20:42.582057 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0908 13:20:42.614465 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0908 13:20:42.652857 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0908 13:20:42.682624 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
I0908 13:20:42.708216 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0908 13:20:42.734706 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
I0908 13:20:42.761119 2962976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0908 13:20:42.781025 2962976 ssh_runner.go:195] Run: openssl version
I0908 13:20:42.787728 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0908 13:20:42.797410 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0908 13:20:42.801102 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 8 12:33 /usr/share/ca-certificates/minikubeCA.pem
I0908 13:20:42.801201 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0908 13:20:42.808137 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0908 13:20:42.817759 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
I0908 13:20:42.827168 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
I0908 13:20:42.831084 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 8 12:41 /usr/share/ca-certificates/2751114.pem
I0908 13:20:42.831147 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
I0908 13:20:42.838272 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
I0908 13:20:42.847707 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
I0908 13:20:42.859460 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
I0908 13:20:42.863196 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 8 12:41 /usr/share/ca-certificates/27511142.pem
I0908 13:20:42.863282 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
I0908 13:20:42.870770 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
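Each ls / `openssl x509 -hash` / `ln -fs` sequence above implements OpenSSL's CA lookup convention: the hash of the certificate's subject name becomes the symlink name <hash>.0 under /etc/ssl/certs. The same shape, spelled out (paths per this log):

h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)   # prints b5213941 for this CA
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"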
I0908 13:20:42.881017 2962976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0908 13:20:42.884978 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0908 13:20:42.894328 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0908 13:20:42.901729 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0908 13:20:42.909115 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0908 13:20:42.916106 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0908 13:20:42.923024 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
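Each `-checkend 86400` probe above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, and a failing check is what would prompt minikube to regenerate the cert. Illustrative standalone use:

if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
  echo 'apiserver cert good for at least 24h'
else
  echo 'apiserver cert expires within 24h'
fi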
I0908 13:20:42.930253 2962976 kubeadm.go:392] StartCluster: {Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 13:20:42.930382 2962976 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0908 13:20:42.930445 2962976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0908 13:20:42.967887 2962976 cri.go:89] found id: "bdd85fd62e90072711cf66c0db968c1136a28f624fc072df147df0fc494584c8"
I0908 13:20:42.967912 2962976 cri.go:89] found id: "e685e05b08b51e456163cacd1644bf5fab5dd1c6118ed288241b851f6da29e62"
I0908 13:20:42.967917 2962976 cri.go:89] found id: "aa297480aa1b27d3b15093502059992dfea640300a65451e6f3db7b6b056ed1a"
I0908 13:20:42.967923 2962976 cri.go:89] found id: "d5252e4ac54a43b7539b2bfe24a8a0183a6b9420e5f2255895a872dd266dfbdd"
I0908 13:20:42.967927 2962976 cri.go:89] found id: "f5e8fe9a2b29ca8f991932c0c60513abc177286d77ac00c6ac9f77de28096302"
I0908 13:20:42.967933 2962976 cri.go:89] found id: "e59a4771913f0c586033aa2f970d5003227c9262bc5c73b7ef6007c8ab2801a0"
I0908 13:20:42.967937 2962976 cri.go:89] found id: "453e5e825289a6e70e8cee4d4d3e9be4fa57968b9f3101e0486c55f00773e336"
I0908 13:20:42.967962 2962976 cri.go:89] found id: "89cf83ed06352d9266afeb8d98085daf1e7cc6dfe5636d2a24ff0d4804025f62"
I0908 13:20:42.967970 2962976 cri.go:89] found id: ""
I0908 13:20:42.968033 2962976 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0908 13:20:42.983189 2962976 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-09-08T13:20:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0908 13:20:42.983277 2962976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0908 13:20:42.992272 2962976 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0908 13:20:42.992292 2962976 kubeadm.go:589] restartPrimaryControlPlane start ...
I0908 13:20:42.992372 2962976 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0908 13:20:43.001397 2962976 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0908 13:20:43.002879 2962976 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-978911" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig
I0908 13:20:43.003451 2962976 kubeconfig.go:62] /home/jenkins/minikube-integration/21508-2749258/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-978911" cluster setting kubeconfig missing "no-preload-978911" context setting]
I0908 13:20:43.004375 2962976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0908 13:20:43.009201 2962976 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0908 13:20:43.018628 2962976 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
I0908 13:20:43.018703 2962976 kubeadm.go:593] duration metric: took 26.405085ms to restartPrimaryControlPlane
I0908 13:20:43.018720 2962976 kubeadm.go:394] duration metric: took 88.476669ms to StartCluster
I0908 13:20:43.018749 2962976 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0908 13:20:43.018813 2962976 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21508-2749258/kubeconfig
I0908 13:20:43.019719 2962976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0908 13:20:43.019921 2962976 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0908 13:20:43.020275 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:20:43.020343 2962976 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0908 13:20:43.020435 2962976 addons.go:69] Setting storage-provisioner=true in profile "no-preload-978911"
I0908 13:20:43.020455 2962976 addons.go:238] Setting addon storage-provisioner=true in "no-preload-978911"
W0908 13:20:43.020466 2962976 addons.go:247] addon storage-provisioner should already be in state true
I0908 13:20:43.020459 2962976 addons.go:69] Setting dashboard=true in profile "no-preload-978911"
I0908 13:20:43.020534 2962976 addons.go:238] Setting addon dashboard=true in "no-preload-978911"
W0908 13:20:43.020568 2962976 addons.go:247] addon dashboard should already be in state true
I0908 13:20:43.020608 2962976 host.go:66] Checking if "no-preload-978911" exists ...
I0908 13:20:43.020489 2962976 host.go:66] Checking if "no-preload-978911" exists ...
I0908 13:20:43.021413 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
I0908 13:20:43.021413 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
I0908 13:20:43.020494 2962976 addons.go:69] Setting metrics-server=true in profile "no-preload-978911"
I0908 13:20:43.022090 2962976 addons.go:238] Setting addon metrics-server=true in "no-preload-978911"
W0908 13:20:43.022105 2962976 addons.go:247] addon metrics-server should already be in state true
I0908 13:20:43.022137 2962976 host.go:66] Checking if "no-preload-978911" exists ...
I0908 13:20:43.022666 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
I0908 13:20:43.020436 2962976 addons.go:69] Setting default-storageclass=true in profile "no-preload-978911"
I0908 13:20:43.024278 2962976 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-978911"
I0908 13:20:43.025361 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
I0908 13:20:43.026372 2962976 out.go:179] * Verifying Kubernetes components...
I0908 13:20:43.030861 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0908 13:20:43.087484 2962976 addons.go:238] Setting addon default-storageclass=true in "no-preload-978911"
W0908 13:20:43.087508 2962976 addons.go:247] addon default-storageclass should already be in state true
I0908 13:20:43.087533 2962976 host.go:66] Checking if "no-preload-978911" exists ...
I0908 13:20:43.087950 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
I0908 13:20:43.102706 2962976 out.go:179] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0908 13:20:43.102748 2962976 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0908 13:20:43.102758 2962976 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0908 13:20:43.105724 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0908 13:20:43.105749 2962976 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0908 13:20:43.105773 2962976 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0908 13:20:43.105788 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0908 13:20:43.105823 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:43.105850 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:43.115360 2962976 out.go:179] - Using image registry.k8s.io/echoserver:1.4
I0908 13:20:43.118627 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0908 13:20:43.118650 2962976 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0908 13:20:43.118715 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:43.136790 2962976 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0908 13:20:43.136812 2962976 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0908 13:20:43.136876 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
I0908 13:20:43.175291 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
I0908 13:20:43.191796 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
I0908 13:20:43.193563 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
I0908 13:20:43.208474 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
I0908 13:20:43.247091 2962976 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0908 13:20:43.292511 2962976 node_ready.go:35] waiting up to 6m0s for node "no-preload-978911" to be "Ready" ...
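node_ready here polls the Node object until its Ready condition turns True (it flips at 13:20:48 below). A one-shot equivalent with kubectl, using the context this run just wrote to the kubeconfig:

kubectl --context no-preload-978911 wait --for=condition=Ready node/no-preload-978911 --timeout=6m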
I0908 13:20:43.382311 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0908 13:20:43.421803 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0908 13:20:43.421878 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0908 13:20:43.458209 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0908 13:20:43.458297 2962976 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0908 13:20:43.473379 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0908 13:20:43.552131 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0908 13:20:43.552207 2962976 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0908 13:20:43.560638 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0908 13:20:43.560711 2962976 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0908 13:20:43.624453 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0908 13:20:43.624479 2962976 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0908 13:20:43.699654 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0908 13:20:43.699676 2962976 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0908 13:20:43.771524 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0908 13:20:43.785451 2962976 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0908 13:20:43.785495 2962976 retry.go:31] will retry after 247.912555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
W0908 13:20:43.785541 2962976 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0908 13:20:43.785547 2962976 retry.go:31] will retry after 303.088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
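Both applies fail identically because the apiserver is not yet answering on localhost:8443 (connection refused), so retry.go schedules another attempt after a few hundred milliseconds; the `apply --force` reruns at 13:20:44 below succeed once the socket is up. A minimal shell rendering of that retry pattern (a sketch only; minikube's retry.go adds jittered backoff):

for delay in 0.25 0.5 1 2; do
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml && break
  sleep "$delay"
done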
I0908 13:20:43.834968 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0908 13:20:43.835041 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0908 13:20:43.961116 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0908 13:20:43.961144 2962976 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0908 13:20:44.033768 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0908 13:20:44.089120 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0908 13:20:44.174947 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0908 13:20:44.174976 2962976 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0908 13:20:44.350161 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0908 13:20:44.350203 2962976 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0908 13:20:44.457915 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0908 13:20:44.457956 2962976 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0908 13:20:44.496024 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0908 13:20:44.496054 2962976 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0908 13:20:44.520347 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0908 13:20:48.489246 2962976 node_ready.go:49] node "no-preload-978911" is "Ready"
I0908 13:20:48.489276 2962976 node_ready.go:38] duration metric: took 5.196680279s for node "no-preload-978911" to be "Ready" ...
I0908 13:20:48.489290 2962976 api_server.go:52] waiting for apiserver process to appear ...
I0908 13:20:48.489355 2962976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0908 13:20:51.238545 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.466984527s)
I0908 13:20:51.238585 2962976 addons.go:479] Verifying addon metrics-server=true in "no-preload-978911"
I0908 13:20:51.400235 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.366422544s)
I0908 13:20:51.400286 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.311136503s)
I0908 13:20:51.400525 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.880137919s)
I0908 13:20:51.400723 2962976 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.911337726s)
I0908 13:20:51.400744 2962976 api_server.go:72] duration metric: took 8.380794855s to wait for apiserver process to appear ...
I0908 13:20:51.400750 2962976 api_server.go:88] waiting for apiserver healthz status ...
I0908 13:20:51.400766 2962976 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0908 13:20:51.403695 2962976 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-978911 addons enable metrics-server
I0908 13:20:51.409743 2962976 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0908 13:20:51.411883 2962976 api_server.go:141] control plane version: v1.34.0
I0908 13:20:51.411916 2962976 api_server.go:131] duration metric: took 11.159718ms to wait for apiserver health ...
I0908 13:20:51.411925 2962976 system_pods.go:43] waiting for kube-system pods to appear ...
I0908 13:20:51.414126 2962976 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
I0908 13:20:51.415529 2962976 system_pods.go:59] 9 kube-system pods found
I0908 13:20:51.415567 2962976 system_pods.go:61] "coredns-66bc5c9577-7www8" [cb6a614e-8f35-46f4-957d-04268f222190] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0908 13:20:51.415576 2962976 system_pods.go:61] "etcd-no-preload-978911" [4e38fee5-f757-4ee8-a97f-c76e4b633559] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0908 13:20:51.415587 2962976 system_pods.go:61] "kindnet-8cc7v" [3da2f7cd-76d4-456a-8cc8-069d4c2405a6] Running
I0908 13:20:51.415596 2962976 system_pods.go:61] "kube-apiserver-no-preload-978911" [44a03487-7993-4879-9ab7-88227004b4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0908 13:20:51.415617 2962976 system_pods.go:61] "kube-controller-manager-no-preload-978911" [7a8863ca-4835-46f7-9529-dd33b2a669f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0908 13:20:51.415626 2962976 system_pods.go:61] "kube-proxy-zb84d" [05431e58-3897-4783-899f-e079efa82e52] Running
I0908 13:20:51.415636 2962976 system_pods.go:61] "kube-scheduler-no-preload-978911" [3d53a214-024a-4b7c-9500-23b47958a0c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0908 13:20:51.415643 2962976 system_pods.go:61] "metrics-server-746fcd58dc-vh962" [959e88f4-10f0-4c5b-98da-0451d012b212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0908 13:20:51.415653 2962976 system_pods.go:61] "storage-provisioner" [062103e0-2e60-4495-84fe-e00955426335] Running
I0908 13:20:51.415659 2962976 system_pods.go:74] duration metric: took 3.729531ms to wait for pod list to return data ...
I0908 13:20:51.415666 2962976 default_sa.go:34] waiting for default service account to be created ...
I0908 13:20:51.417731 2962976 addons.go:514] duration metric: took 8.397368128s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
I0908 13:20:51.418437 2962976 default_sa.go:45] found service account: "default"
I0908 13:20:51.418459 2962976 default_sa.go:55] duration metric: took 2.786524ms for default service account to be created ...
I0908 13:20:51.418468 2962976 system_pods.go:116] waiting for k8s-apps to be running ...
I0908 13:20:51.421224 2962976 system_pods.go:86] 9 kube-system pods found
I0908 13:20:51.421265 2962976 system_pods.go:89] "coredns-66bc5c9577-7www8" [cb6a614e-8f35-46f4-957d-04268f222190] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0908 13:20:51.421274 2962976 system_pods.go:89] "etcd-no-preload-978911" [4e38fee5-f757-4ee8-a97f-c76e4b633559] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0908 13:20:51.421281 2962976 system_pods.go:89] "kindnet-8cc7v" [3da2f7cd-76d4-456a-8cc8-069d4c2405a6] Running
I0908 13:20:51.421293 2962976 system_pods.go:89] "kube-apiserver-no-preload-978911" [44a03487-7993-4879-9ab7-88227004b4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0908 13:20:51.421309 2962976 system_pods.go:89] "kube-controller-manager-no-preload-978911" [7a8863ca-4835-46f7-9529-dd33b2a669f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0908 13:20:51.421314 2962976 system_pods.go:89] "kube-proxy-zb84d" [05431e58-3897-4783-899f-e079efa82e52] Running
I0908 13:20:51.421321 2962976 system_pods.go:89] "kube-scheduler-no-preload-978911" [3d53a214-024a-4b7c-9500-23b47958a0c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0908 13:20:51.421331 2962976 system_pods.go:89] "metrics-server-746fcd58dc-vh962" [959e88f4-10f0-4c5b-98da-0451d012b212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0908 13:20:51.421336 2962976 system_pods.go:89] "storage-provisioner" [062103e0-2e60-4495-84fe-e00955426335] Running
I0908 13:20:51.421344 2962976 system_pods.go:126] duration metric: took 2.87014ms to wait for k8s-apps to be running ...
I0908 13:20:51.421353 2962976 system_svc.go:44] waiting for kubelet service to be running ....
I0908 13:20:51.421410 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0908 13:20:51.436310 2962976 system_svc.go:56] duration metric: took 14.944353ms WaitForService to wait for kubelet
I0908 13:20:51.436337 2962976 kubeadm.go:578] duration metric: took 8.416385263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0908 13:20:51.436356 2962976 node_conditions.go:102] verifying NodePressure condition ...
I0908 13:20:51.439945 2962976 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0908 13:20:51.440017 2962976 node_conditions.go:123] node cpu capacity is 2
I0908 13:20:51.440045 2962976 node_conditions.go:105] duration metric: took 3.683559ms to run NodePressure ...
I0908 13:20:51.440074 2962976 start.go:241] waiting for startup goroutines ...
I0908 13:20:51.440108 2962976 start.go:246] waiting for cluster config update ...
I0908 13:20:51.440137 2962976 start.go:255] writing updated cluster config ...
I0908 13:20:51.440471 2962976 ssh_runner.go:195] Run: rm -f paused
I0908 13:20:51.443803 2962976 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I0908 13:20:51.448363 2962976 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7www8" in "kube-system" namespace to be "Ready" or be gone ...
W0908 13:20:53.454728 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:20:55.456275 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:20:57.954301 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:20:59.954409 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:02.454336 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:04.954288 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:06.954819 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:09.453453 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:11.453986 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:13.454640 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:15.454718 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:17.953833 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:19.953875 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:21.954243 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:23.954403 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
W0908 13:21:26.453994 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
I0908 13:21:27.454550 2962976 pod_ready.go:94] pod "coredns-66bc5c9577-7www8" is "Ready"
I0908 13:21:27.454580 2962976 pod_ready.go:86] duration metric: took 36.006192784s for pod "coredns-66bc5c9577-7www8" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:27.457609 2962976 pod_ready.go:83] waiting for pod "etcd-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:27.462474 2962976 pod_ready.go:94] pod "etcd-no-preload-978911" is "Ready"
I0908 13:21:27.462506 2962976 pod_ready.go:86] duration metric: took 4.86819ms for pod "etcd-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:27.464750 2962976 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:27.474261 2962976 pod_ready.go:94] pod "kube-apiserver-no-preload-978911" is "Ready"
I0908 13:21:27.474285 2962976 pod_ready.go:86] duration metric: took 9.508793ms for pod "kube-apiserver-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:27.477518 2962976 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:27.657753 2962976 pod_ready.go:94] pod "kube-controller-manager-no-preload-978911" is "Ready"
I0908 13:21:27.657786 2962976 pod_ready.go:86] duration metric: took 180.242988ms for pod "kube-controller-manager-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:27.853931 2962976 pod_ready.go:83] waiting for pod "kube-proxy-zb84d" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:28.252014 2962976 pod_ready.go:94] pod "kube-proxy-zb84d" is "Ready"
I0908 13:21:28.252038 2962976 pod_ready.go:86] duration metric: took 398.080343ms for pod "kube-proxy-zb84d" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:28.451995 2962976 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:28.852776 2962976 pod_ready.go:94] pod "kube-scheduler-no-preload-978911" is "Ready"
I0908 13:21:28.852805 2962976 pod_ready.go:86] duration metric: took 400.781462ms for pod "kube-scheduler-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
I0908 13:21:28.852820 2962976 pod_ready.go:40] duration metric: took 37.408986235s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I0908 13:21:28.914752 2962976 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
I0908 13:21:28.918052 2962976 out.go:179] * Done! kubectl is now configured to use "no-preload-978911" cluster and "default" namespace by default
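The closing version line flags a client/server minor skew of 1 (kubectl 1.33.2 against cluster 1.34.0), which is inside kubectl's supported +/-1 window, so it is informational only. To reproduce the comparison by hand:

kubectl version --output=json   # compare clientVersion vs serverVersion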
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
9f55ee77b42a1 523cad1a4df73 3 minutes ago Exited dashboard-metrics-scraper 6 35541ea28d6e4 dashboard-metrics-scraper-5f989dc9cf-fhtcn
0b74b004307ae ba04bb24b9575 8 minutes ago Running storage-provisioner 2 bbe423ed58299 storage-provisioner
6b1f4f786b29f 1611cd07b61d5 9 minutes ago Running busybox 1 5d758b6ec40e7 busybox
b5ae572e6c321 97e04611ad434 9 minutes ago Running coredns 1 488eef1731825 coredns-5dd5756b68-d78mw
873d0865e41ef b1a8c6f707935 9 minutes ago Running kindnet-cni 1 8475d2f244eb3 kindnet-hc6xz
105ff83200e38 940f54a5bcae9 9 minutes ago Running kube-proxy 1 a2b9277fc3436 kube-proxy-5lcjb
66645eab9b879 ba04bb24b9575 9 minutes ago Exited storage-provisioner 1 bbe423ed58299 storage-provisioner
a472dcf368c18 9cdd6470f48c8 9 minutes ago Running etcd 1 dd9040d73d36b etcd-old-k8s-version-552634
306f060aeefe6 46cc66ccc7c19 9 minutes ago Running kube-controller-manager 1 fb5b01c16af98 kube-controller-manager-old-k8s-version-552634
f99ef8a528998 762dce4090c5f 9 minutes ago Running kube-scheduler 1 46f9a80d026b2 kube-scheduler-old-k8s-version-552634
ad5401098ad61 00543d2fe5d71 9 minutes ago Running kube-apiserver 1 5aa721eca188e kube-apiserver-old-k8s-version-552634
8b4252d29a3c9 1611cd07b61d5 10 minutes ago Exited busybox 0 f9cb69407935e busybox
887f29bb1a772 97e04611ad434 10 minutes ago Exited coredns 0 a553ca9be588b coredns-5dd5756b68-d78mw
9895c6c404f91 b1a8c6f707935 10 minutes ago Exited kindnet-cni 0 adf66c46e22c7 kindnet-hc6xz
ebc5022b0aeaa 940f54a5bcae9 10 minutes ago Exited kube-proxy 0 37c9c0bd19c4a kube-proxy-5lcjb
12a8c02c281d2 00543d2fe5d71 11 minutes ago Exited kube-apiserver 0 0747d2824c491 kube-apiserver-old-k8s-version-552634
56c17c12d8122 762dce4090c5f 11 minutes ago Exited kube-scheduler 0 3ca96ef5cfac0 kube-scheduler-old-k8s-version-552634
86ee799990106 9cdd6470f48c8 11 minutes ago Exited etcd 0 59bfcc3d2aaac etcd-old-k8s-version-552634
cbd09fa5b3a5f 46cc66ccc7c19 11 minutes ago Exited kube-controller-manager 0 b6c9cd52874d0 kube-controller-manager-old-k8s-version-552634
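The table above is minikube's rendering of CRI container state on the old-k8s-version-552634 node; roughly the same view can be pulled on the node directly (illustrative):

sudo crictl ps -a   # all containers, including the Exited attempt-6 dashboard-metrics-scraper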
==> containerd <==
Sep 08 13:21:48 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:48.720503860Z" level=info msg="StartContainer for \"5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9\" returns successfully"
Sep 08 13:21:48 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:48.743210478Z" level=info msg="shim disconnected" id=5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9 namespace=k8s.io
Sep 08 13:21:48 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:48.743249746Z" level=warning msg="cleaning up after shim disconnected" id=5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9 namespace=k8s.io
Sep 08 13:21:48 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:48.743287678Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 08 13:21:49 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:49.707350997Z" level=info msg="RemoveContainer for \"857cc59bb78b32d73b8ad3fab568dd8478c2b8c176843fb16b06a532aebd3f19\""
Sep 08 13:21:49 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:49.717812552Z" level=info msg="RemoveContainer for \"857cc59bb78b32d73b8ad3fab568dd8478c2b8c176843fb16b06a532aebd3f19\" returns successfully"
Sep 08 13:24:11 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:11.637646503Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Sep 08 13:24:11 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:11.643159435Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Sep 08 13:24:11 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:11.645226488Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Sep 08 13:24:11 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:11.645259406Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Sep 08 13:24:25 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:25.638199290Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
Sep 08 13:24:25 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:25.640504433Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Sep 08 13:24:25 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:25.765136666Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Sep 08 13:24:26 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:26.072530145Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 13:24:26 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:26.072569701Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.638891597Z" level=info msg="CreateContainer within sandbox \"35541ea28d6e4cdb4992bca1189dbb99418187d926c8d278cf72f8d44e4f8809\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.657597012Z" level=info msg="CreateContainer within sandbox \"35541ea28d6e4cdb4992bca1189dbb99418187d926c8d278cf72f8d44e4f8809\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\""
Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.658461251Z" level=info msg="StartContainer for \"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\""
Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.713810519Z" level=info msg="StartContainer for \"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\" returns successfully"
Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.716770372Z" level=info msg="received exit event container_id:\"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\" id:\"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\" pid:2826 exit_status:255 exited_at:{seconds:1757337877 nanos:716474437}"
Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.742866419Z" level=info msg="shim disconnected" id=9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e namespace=k8s.io
Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.742903243Z" level=warning msg="cleaning up after shim disconnected" id=9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e namespace=k8s.io
Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.742941248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 08 13:24:38 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:38.090288585Z" level=info msg="RemoveContainer for \"5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9\""
Sep 08 13:24:38 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:38.097504912Z" level=info msg="RemoveContainer for \"5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9\" returns successfully"
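Note: the containerd log above records two unrelated pull failures. The echoserver pull fails on DNS (fake.domain is deliberately unresolvable, so that error looks like intentional test noise for metrics-server), while the dashboard pull fails with HTTP 429 from Docker Hub's unauthenticated pull rate limit, which is what actually starves the test. The "failed to decode hosts.toml" messages point at a malformed registry hosts.toml on the node and are separate noise. A workaround sketch, assuming an authenticated Docker Hub account on the host; note the pod references the image by digest, so the preloaded image must carry that same digest:

$ docker login
$ docker pull docker.io/kubernetesui/dashboard:v2.7.0
$ minikube image load docker.io/kubernetesui/dashboard:v2.7.0 -p old-k8s-version-552634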
==> coredns [887f29bb1a772ea77ca331bb52f5acf91f88e07e5ede3c3a3a74a6959bc2d4e5] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
CoreDNS-1.10.1
linux/arm64, go1.20, 055b2c3
[INFO] 127.0.0.1:52648 - 59601 "HINFO IN 4276947130458500050.8211269930752862866. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027626934s
==> coredns [b5ae572e6c3217fbcd8a8a6bb2451f35b959e4517116e7b1d056ad2e30ede111] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
CoreDNS-1.10.1
linux/arm64, go1.20, 055b2c3
[INFO] 127.0.0.1:40339 - 53587 "HINFO IN 3405231602673676994.1114408218903977437. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.065848228s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
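Note: the restarted CoreDNS instance waited on the Kubernetes API (the apiserver was still coming back up), started once with an unsynced cache, and then served normally; the HINFO/NXDOMAIN line is CoreDNS's routine startup self-query, not an error. To pull these logs directly, assuming the standard k8s-app=kube-dns label:

$ kubectl --context old-k8s-version-552634 -n kube-system logs -l k8s-app=kube-dns --tail=50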
==> describe nodes <==
Name: old-k8s-version-552634
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-552634
kubernetes.io/os=linux
minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
minikube.k8s.io/name=old-k8s-version-552634
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_09_08T13_16_55_0700
minikube.k8s.io/version=v1.36.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 08 Sep 2025 13:16:50 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-552634
AcquireTime: <unset>
RenewTime: Mon, 08 Sep 2025 13:27:47 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 08 Sep 2025 13:23:51 +0000 Mon, 08 Sep 2025 13:16:47 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 08 Sep 2025 13:23:51 +0000 Mon, 08 Sep 2025 13:16:47 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 08 Sep 2025 13:23:51 +0000 Mon, 08 Sep 2025 13:16:47 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 08 Sep 2025 13:23:51 +0000 Mon, 08 Sep 2025 13:17:04 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-552634
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: 18ee3a277fa24969a47806fabfc259c0
System UUID: 637c7b27-ceff-4552-8bc2-a5a52de7b8d9
Boot ID: 9f5228b8-b58e-4b72-938a-84f5f7e9d841
Kernel Version: 5.15.0-1084-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.27
Kubelet Version: v1.28.0
Kube-Proxy Version: v1.28.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-5dd5756b68-d78mw 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 10m
kube-system etcd-old-k8s-version-552634 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 10m
kube-system kindnet-hc6xz 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 10m
kube-system kube-apiserver-old-k8s-version-552634 250m (12%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-controller-manager-old-k8s-version-552634 200m (10%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-proxy-5lcjb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-scheduler-old-k8s-version-552634 100m (5%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system metrics-server-57f55c9bc5-ppxnd 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 10m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kubernetes-dashboard dashboard-metrics-scraper-5f989dc9cf-fhtcn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m25s
kubernetes-dashboard kubernetes-dashboard-8694d4445c-2x25h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m25s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 10m kube-proxy
Normal Starting 9m35s kube-proxy
Normal Starting 10m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 10m kubelet Node old-k8s-version-552634 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 10m kubelet Node old-k8s-version-552634 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 10m kubelet Node old-k8s-version-552634 status is now: NodeHasSufficientPID
Normal NodeNotReady 10m kubelet Node old-k8s-version-552634 status is now: NodeNotReady
Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 10m kubelet Node old-k8s-version-552634 status is now: NodeReady
Normal RegisteredNode 10m node-controller Node old-k8s-version-552634 event: Registered Node old-k8s-version-552634 in Controller
Normal Starting 9m43s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9m43s (x8 over 9m43s) kubelet Node old-k8s-version-552634 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m43s (x8 over 9m43s) kubelet Node old-k8s-version-552634 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m43s (x7 over 9m43s) kubelet Node old-k8s-version-552634 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 9m43s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 9m25s node-controller Node old-k8s-version-552634 event: Registered Node old-k8s-version-552634 in Controller
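Note: the node itself is healthy: Ready since 13:17:04, no memory, disk, or PID pressure, no taints, and CPU requests at 950m of 2 full cores (47%). Scheduling is not the bottleneck; the dashboard and metrics-server pods are assigned to the node but stuck on image pulls. A quick cross-check:

$ kubectl --context old-k8s-version-552634 get pods -A -o wide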
==> dmesg <==
[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
==> etcd [86ee7999901068cbb878838a61a92c5a7f51e9f4bcca6f825a9580a81d698726] <==
{"level":"info","ts":"2025-09-08T13:16:46.858909Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2025-09-08T13:16:46.859245Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2025-09-08T13:16:46.859325Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2025-09-08T13:16:46.860514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
{"level":"info","ts":"2025-09-08T13:16:46.861226Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
{"level":"info","ts":"2025-09-08T13:16:46.861291Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
{"level":"info","ts":"2025-09-08T13:16:46.862576Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
{"level":"info","ts":"2025-09-08T13:16:47.416142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
{"level":"info","ts":"2025-09-08T13:16:47.416189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
{"level":"info","ts":"2025-09-08T13:16:47.416218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
{"level":"info","ts":"2025-09-08T13:16:47.416369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
{"level":"info","ts":"2025-09-08T13:16:47.416482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
{"level":"info","ts":"2025-09-08T13:16:47.416574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
{"level":"info","ts":"2025-09-08T13:16:47.416667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
{"level":"info","ts":"2025-09-08T13:16:47.419515Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2025-09-08T13:16:47.420115Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-552634 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
{"level":"info","ts":"2025-09-08T13:16:47.420267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-09-08T13:16:47.421471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
{"level":"info","ts":"2025-09-08T13:16:47.421887Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
{"level":"info","ts":"2025-09-08T13:16:47.421996Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-09-08T13:16:47.422057Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-09-08T13:16:47.422206Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-09-08T13:16:47.43072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-09-08T13:16:47.432069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-09-08T13:16:47.432222Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
==> etcd [a472dcf368c18f5e6a5223534ab4707aba16f0f2f8f1d2a0a9e7ffbfa099c6a6] <==
{"level":"info","ts":"2025-09-08T13:18:10.276051Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2025-09-08T13:18:10.276148Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2025-09-08T13:18:10.276473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
{"level":"info","ts":"2025-09-08T13:18:10.276653Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
{"level":"info","ts":"2025-09-08T13:18:10.276861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
{"level":"info","ts":"2025-09-08T13:18:10.277011Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-09-08T13:18:10.300492Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2025-09-08T13:18:10.305872Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
{"level":"info","ts":"2025-09-08T13:18:10.306073Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
{"level":"info","ts":"2025-09-08T13:18:10.306477Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2025-09-08T13:18:10.307823Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2025-09-08T13:18:11.239821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
{"level":"info","ts":"2025-09-08T13:18:11.240087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
{"level":"info","ts":"2025-09-08T13:18:11.24026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
{"level":"info","ts":"2025-09-08T13:18:11.240353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
{"level":"info","ts":"2025-09-08T13:18:11.240432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
{"level":"info","ts":"2025-09-08T13:18:11.240522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
{"level":"info","ts":"2025-09-08T13:18:11.240598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
{"level":"info","ts":"2025-09-08T13:18:11.242592Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-552634 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
{"level":"info","ts":"2025-09-08T13:18:11.242762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-09-08T13:18:11.244005Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-09-08T13:18:11.242819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-09-08T13:18:11.255565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
{"level":"info","ts":"2025-09-08T13:18:11.258399Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-09-08T13:18:11.258515Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
==> kernel <==
13:27:52 up 19:10, 0 users, load average: 0.20, 0.88, 1.86
Linux old-k8s-version-552634 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [873d0865e41efef1cdc5723e8effa5359186450c3fdc15bde735024a70d67f7a] <==
I0908 13:25:46.609187 1 main.go:301] handling current node
I0908 13:25:56.612320 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:25:56.612356 1 main.go:301] handling current node
I0908 13:26:06.615234 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:26:06.615270 1 main.go:301] handling current node
I0908 13:26:16.608893 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:26:16.608926 1 main.go:301] handling current node
I0908 13:26:26.615094 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:26:26.615145 1 main.go:301] handling current node
I0908 13:26:36.617240 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:26:36.617275 1 main.go:301] handling current node
I0908 13:26:46.608719 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:26:46.608766 1 main.go:301] handling current node
I0908 13:26:56.612233 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:26:56.612265 1 main.go:301] handling current node
I0908 13:27:06.616655 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:27:06.616687 1 main.go:301] handling current node
I0908 13:27:16.608120 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:27:16.608153 1 main.go:301] handling current node
I0908 13:27:26.609167 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:27:26.609203 1 main.go:301] handling current node
I0908 13:27:36.613741 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:27:36.613782 1 main.go:301] handling current node
I0908 13:27:46.609146 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:27:46.609206 1 main.go:301] handling current node
==> kindnet [9895c6c404f918357f3fe8f891a3b387606c5f693ab288d576f52f4f6ff3214f] <==
I0908 13:17:10.107287 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I0908 13:17:10.107858 1 main.go:139] hostIP = 192.168.76.2
podIP = 192.168.76.2
I0908 13:17:10.208007 1 main.go:148] setting mtu 1500 for CNI
I0908 13:17:10.208039 1 main.go:178] kindnetd IP family: "ipv4"
I0908 13:17:10.208056 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-09-08T13:17:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I0908 13:17:10.399790 1 controller.go:377] "Starting controller" name="kube-network-policies"
I0908 13:17:10.399870 1 controller.go:381] "Waiting for informer caches to sync"
I0908 13:17:10.399897 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I0908 13:17:10.402060 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I0908 13:17:10.601314 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I0908 13:17:10.601444 1 metrics.go:72] Registering metrics
I0908 13:17:10.601636 1 controller.go:711] "Syncing nftables rules"
I0908 13:17:20.403628 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:17:20.403684 1 main.go:301] handling current node
I0908 13:17:30.404510 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:17:30.404697 1 main.go:301] handling current node
I0908 13:17:40.399112 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0908 13:17:40.399145 1 main.go:301] handling current node
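Note: both kindnet runs are healthy; the restarted instance reconciles the single node roughly every ten seconds, and the one warning in the first run (no NRI socket at /var/run/nri/nri.sock) is benign when NRI is not enabled. To tail the DaemonSet directly, assuming kindnet's usual app=kindnet label:

$ kubectl --context old-k8s-version-552634 -n kube-system logs -l app=kindnet --tail=20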
==> kube-apiserver [12a8c02c281d2079f1f0b5cb46532c15ceef81c18c7ee4d11f73a0a60044feaf] <==
I0908 13:16:53.898716 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0908 13:16:53.921523 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0908 13:16:53.933698 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
http2: server: error reading preface from client 192.168.76.2:33400: read tcp 192.168.76.2:8443->192.168.76.2:33400: read: connection reset by peer
I0908 13:17:06.498189 1 controller.go:624] quota admission added evaluator for: replicasets.apps
I0908 13:17:06.692514 1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
W0908 13:17:48.243423 1 handler_proxy.go:93] no RequestInfo found in the context
E0908 13:17:48.243491 1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0908 13:17:48.243950 1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
I0908 13:17:48.244200 1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
W0908 13:17:48.252554 1 handler_proxy.go:93] no RequestInfo found in the context
E0908 13:17:48.252623 1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
E0908 13:17:48.252661 1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
I0908 13:17:48.252685 1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
I0908 13:17:48.252695 1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I0908 13:17:48.417382 1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.99.6.154"}
W0908 13:17:48.439844 1 handler_proxy.go:93] no RequestInfo found in the context
E0908 13:17:48.439914 1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
E0908 13:17:48.441341 1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
W0908 13:17:48.455258 1 handler_proxy.go:93] no RequestInfo found in the context
E0908 13:17:48.455515 1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
==> kube-apiserver [ad5401098ad612028621c25bb73c63049b339fca6a311e78ef665de02be9a792] <==
I0908 13:23:15.301653 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0908 13:23:15.301681 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0908 13:23:15.303580 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0908 13:24:13.882829 1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
I0908 13:24:13.882940 1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
W0908 13:24:15.301711 1 handler_proxy.go:93] no RequestInfo found in the context
E0908 13:24:15.301806 1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0908 13:24:15.301817 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0908 13:24:15.303703 1 handler_proxy.go:93] no RequestInfo found in the context
E0908 13:24:15.303736 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0908 13:24:15.303744 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0908 13:25:13.882276 1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
I0908 13:25:13.882306 1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I0908 13:26:13.882041 1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
I0908 13:26:13.882064 1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
W0908 13:26:15.302478 1 handler_proxy.go:93] no RequestInfo found in the context
E0908 13:26:15.302575 1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0908 13:26:15.302615 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0908 13:26:15.304598 1 handler_proxy.go:93] no RequestInfo found in the context
E0908 13:26:15.304621 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0908 13:26:15.304627 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0908 13:27:13.881582 1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
I0908 13:27:13.881612 1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
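Note: every recurring apiserver error above traces back to the v1beta1.metrics.k8s.io APIService: the metrics-server Service has ClusterIP 10.99.6.154, but nothing answers on 443 (connection refused) because the backing pod never started. This is collateral damage from the image-pull failure, not an apiserver fault. To confirm:

$ kubectl --context old-k8s-version-552634 get apiservice v1beta1.metrics.k8s.io
$ kubectl --context old-k8s-version-552634 -n kube-system get endpoints metrics-server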
==> kube-controller-manager [306f060aeefe68259f7a715c7e170802f56b0889bb02eba2839a448bbe10626f] <==
I0908 13:22:57.246892 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
E0908 13:23:26.785060 1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0908 13:23:27.254584 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
E0908 13:23:56.789309 1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0908 13:23:57.262703 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
I0908 13:24:24.653925 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="79.03µs"
E0908 13:24:26.794297 1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0908 13:24:27.270995 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
I0908 13:24:36.666539 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="107.402µs"
I0908 13:24:38.101098 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="186.604µs"
I0908 13:24:40.651496 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="102.324µs"
I0908 13:24:47.726405 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="125.092µs"
I0908 13:24:53.650028 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.621µs"
E0908 13:24:56.799501 1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0908 13:24:57.278466 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
E0908 13:25:26.804593 1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0908 13:25:27.286560 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
E0908 13:25:56.809678 1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0908 13:25:57.294456 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
E0908 13:26:26.814787 1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0908 13:26:27.303324 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
E0908 13:26:56.819241 1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0908 13:26:57.312066 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
E0908 13:27:26.824102 1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0908 13:27:27.328339 1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
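Note: the controller-manager's twice-a-minute "stale GroupVersion discovery" and garbage-collector messages are the same metrics-server outage seen from another angle; the resource-quota and GC controllers keep retrying aggregated discovery and failing on metrics.k8s.io/v1beta1. A direct probe of that group:

$ kubectl --context old-k8s-version-552634 get --raw /apis/metrics.k8s.io/v1beta1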
==> kube-controller-manager [cbd09fa5b3a5f2baa29480044435939c6661de8956c0144f35b364d38a9a8c5d] <==
I0908 13:17:06.905350 1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d78mw"
I0908 13:17:06.961512 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="451.892045ms"
I0908 13:17:06.990983 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.413519ms"
I0908 13:17:06.991131 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.636µs"
I0908 13:17:07.009558 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.894µs"
I0908 13:17:07.066198 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.47µs"
I0908 13:17:08.290007 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
I0908 13:17:08.327643 1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-l7qhx"
I0908 13:17:08.357540 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.022947ms"
I0908 13:17:08.370967 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.380063ms"
I0908 13:17:08.371362 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.453µs"
I0908 13:17:09.300479 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.878µs"
I0908 13:17:09.324970 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.488µs"
I0908 13:17:09.335279 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.751µs"
I0908 13:17:33.297910 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.014µs"
I0908 13:17:33.333907 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.272005ms"
I0908 13:17:33.334047 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.728µs"
I0908 13:17:48.277484 1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
I0908 13:17:48.293942 1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-57f55c9bc5-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
I0908 13:17:48.302667 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="26.010125ms"
E0908 13:17:48.302864 1 replica_set.go:557] sync "kube-system/metrics-server-57f55c9bc5" failed with pods "metrics-server-57f55c9bc5-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
I0908 13:17:48.338413 1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-ppxnd"
I0908 13:17:48.367949 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="65.027384ms"
I0908 13:17:48.389055 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="20.848675ms"
I0908 13:17:48.389382 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="83.682µs"
==> kube-proxy [105ff83200e38a12913faaecd2d0fb83a38b4d40ae898a992f24c5f0b7a7c61b] <==
I0908 13:18:16.237529 1 server_others.go:69] "Using iptables proxy"
I0908 13:18:16.293058 1 node.go:141] Successfully retrieved node IP: 192.168.76.2
I0908 13:18:16.409028 1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0908 13:18:16.410966 1 server_others.go:152] "Using iptables Proxier"
I0908 13:18:16.411007 1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0908 13:18:16.411015 1 server_others.go:438] "Defaulting to no-op detect-local"
I0908 13:18:16.411045 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0908 13:18:16.411252 1 server.go:846] "Version info" version="v1.28.0"
I0908 13:18:16.411262 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0908 13:18:16.415120 1 config.go:188] "Starting service config controller"
I0908 13:18:16.415145 1 shared_informer.go:311] Waiting for caches to sync for service config
I0908 13:18:16.415164 1 config.go:97] "Starting endpoint slice config controller"
I0908 13:18:16.415168 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0908 13:18:16.415590 1 config.go:315] "Starting node config controller"
I0908 13:18:16.415596 1 shared_informer.go:311] Waiting for caches to sync for node config
I0908 13:18:16.515704 1 shared_informer.go:318] Caches are synced for node config
I0908 13:18:16.515740 1 shared_informer.go:318] Caches are synced for service config
I0908 13:18:16.515781 1 shared_informer.go:318] Caches are synced for endpoint slice config
==> kube-proxy [ebc5022b0aeaa3ac29b4e9ce1ac124b836e51d29870a4e127105d359fce607b3] <==
I0908 13:17:07.582138 1 server_others.go:69] "Using iptables proxy"
I0908 13:17:07.625942 1 node.go:141] Successfully retrieved node IP: 192.168.76.2
I0908 13:17:07.698479 1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0908 13:17:07.704374 1 server_others.go:152] "Using iptables Proxier"
I0908 13:17:07.704422 1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0908 13:17:07.704431 1 server_others.go:438] "Defaulting to no-op detect-local"
I0908 13:17:07.704471 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0908 13:17:07.706455 1 server.go:846] "Version info" version="v1.28.0"
I0908 13:17:07.706479 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0908 13:17:07.707911 1 config.go:188] "Starting service config controller"
I0908 13:17:07.707928 1 shared_informer.go:311] Waiting for caches to sync for service config
I0908 13:17:07.707946 1 config.go:97] "Starting endpoint slice config controller"
I0908 13:17:07.707949 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0908 13:17:07.708351 1 config.go:315] "Starting node config controller"
I0908 13:17:07.708358 1 shared_informer.go:311] Waiting for caches to sync for node config
I0908 13:17:07.808233 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0908 13:17:07.808322 1 shared_informer.go:318] Caches are synced for service config
I0908 13:17:07.808601 1 shared_informer.go:318] Caches are synced for node config
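Note: both kube-proxy instances came up cleanly in iptables mode, detected the node IP, and synced all three config caches (service, endpoint slice, node), so service routing is not implicated in the failure. A sanity check on the DaemonSet:

$ kubectl --context old-k8s-version-552634 -n kube-system get daemonset kube-proxy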
==> kube-scheduler [56c17c12d8122dd6d365bc92de07c71d041472f11a561084d79ef44eda4e026b] <==
W0908 13:16:50.798751 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0908 13:16:50.799375 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0908 13:16:50.798813 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0908 13:16:50.801342 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0908 13:16:50.799293 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0908 13:16:50.801760 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0908 13:16:50.800539 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0908 13:16:50.801790 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0908 13:16:50.801683 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0908 13:16:50.801806 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0908 13:16:51.675764 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0908 13:16:51.676019 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0908 13:16:51.710585 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0908 13:16:51.710807 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0908 13:16:51.813571 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0908 13:16:51.813612 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0908 13:16:51.868033 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0908 13:16:51.868069 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0908 13:16:51.930076 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0908 13:16:51.930623 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0908 13:16:51.965948 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0908 13:16:51.965986 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0908 13:16:52.116484 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0908 13:16:52.116778 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0908 13:16:53.983929 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [f99ef8a5289987ae2ab7840f3ca0c7298d3bca981189b327213d7ac0466ffddc] <==
W0908 13:18:14.287661 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0908 13:18:14.287683 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0908 13:18:14.287802 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0908 13:18:14.287821 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0908 13:18:14.287995 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0908 13:18:14.288017 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0908 13:18:14.288173 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0908 13:18:14.288194 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0908 13:18:14.288275 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0908 13:18:14.288294 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0908 13:18:14.293206 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0908 13:18:14.293247 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0908 13:18:14.293264 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0908 13:18:14.293272 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0908 13:18:14.293346 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0908 13:18:14.293357 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0908 13:18:14.293414 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0908 13:18:14.293423 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0908 13:18:14.301542 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0908 13:18:14.301611 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0908 13:18:14.302109 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0908 13:18:14.302132 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0908 13:18:14.302325 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0908 13:18:14.302353 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0908 13:18:15.862837 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 08 13:26:36 old-k8s-version-552634 kubelet[667]: E0908 13:26:36.637704 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
Sep 08 13:26:36 old-k8s-version-552634 kubelet[667]: E0908 13:26:36.638238 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
Sep 08 13:26:39 old-k8s-version-552634 kubelet[667]: I0908 13:26:39.636752 667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
Sep 08 13:26:39 old-k8s-version-552634 kubelet[667]: E0908 13:26:39.637173 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
Sep 08 13:26:49 old-k8s-version-552634 kubelet[667]: E0908 13:26:49.636933 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
Sep 08 13:26:49 old-k8s-version-552634 kubelet[667]: E0908 13:26:49.637233 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
Sep 08 13:26:50 old-k8s-version-552634 kubelet[667]: I0908 13:26:50.636633 667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
Sep 08 13:26:50 old-k8s-version-552634 kubelet[667]: E0908 13:26:50.637253 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
Sep 08 13:27:01 old-k8s-version-552634 kubelet[667]: E0908 13:27:01.637306 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
Sep 08 13:27:01 old-k8s-version-552634 kubelet[667]: E0908 13:27:01.638119 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
Sep 08 13:27:03 old-k8s-version-552634 kubelet[667]: I0908 13:27:03.636341 667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
Sep 08 13:27:03 old-k8s-version-552634 kubelet[667]: E0908 13:27:03.636673 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
Sep 08 13:27:14 old-k8s-version-552634 kubelet[667]: E0908 13:27:14.637238 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
Sep 08 13:27:15 old-k8s-version-552634 kubelet[667]: I0908 13:27:15.645713 667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
Sep 08 13:27:15 old-k8s-version-552634 kubelet[667]: E0908 13:27:15.646563 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
Sep 08 13:27:16 old-k8s-version-552634 kubelet[667]: E0908 13:27:16.638155 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
Sep 08 13:27:28 old-k8s-version-552634 kubelet[667]: E0908 13:27:28.637349 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
Sep 08 13:27:29 old-k8s-version-552634 kubelet[667]: I0908 13:27:29.636546 667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
Sep 08 13:27:29 old-k8s-version-552634 kubelet[667]: E0908 13:27:29.636854 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
Sep 08 13:27:29 old-k8s-version-552634 kubelet[667]: E0908 13:27:29.637445 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
Sep 08 13:27:40 old-k8s-version-552634 kubelet[667]: E0908 13:27:40.636743 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
Sep 08 13:27:42 old-k8s-version-552634 kubelet[667]: I0908 13:27:42.636751 667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
Sep 08 13:27:42 old-k8s-version-552634 kubelet[667]: E0908 13:27:42.637056 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
Sep 08 13:27:43 old-k8s-version-552634 kubelet[667]: E0908 13:27:43.637305 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
Sep 08 13:27:52 old-k8s-version-552634 kubelet[667]: E0908 13:27:52.643008 667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
==> storage-provisioner [0b74b004307ae8f29b60bbbe51b55dd3ea17fad6807bb10d9fdaede541bcaa19] <==
I0908 13:18:57.887092 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0908 13:18:57.915119 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0908 13:18:57.918502 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0908 13:19:15.317463 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0908 13:19:15.317887 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"20fbcd62-30a7-4d88-b856-ad9fb9fbe64d", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-552634_53f6aea1-7a70-4fd8-a67e-250e35f21845 became leader
I0908 13:19:15.317956 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-552634_53f6aea1-7a70-4fd8-a67e-250e35f21845!
I0908 13:19:15.418954 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-552634_53f6aea1-7a70-4fd8-a67e-250e35f21845!
==> storage-provisioner [66645eab9b879074e918236fe3987ab393e3cfbf8d3bc59ea2e30b38c88ef369] <==
I0908 13:18:15.931183 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0908 13:18:45.934234 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-552634 -n old-k8s-version-552634
helpers_test.go:269: (dbg) Run: kubectl --context old-k8s-version-552634 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context old-k8s-version-552634 describe pod metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-552634 describe pod metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h: exit status 1 (80.161182ms)
** stderr **
Error from server (NotFound): pods "metrics-server-57f55c9bc5-ppxnd" not found
Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-2x25h" not found
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-552634 describe pod metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.65s)