=== RUN TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 pause -p embed-certs-594077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-594077 --alsologtostderr -v=1: (1.640302545s)
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077
E1213 09:34:18.852172 13307 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/functional-016924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077: exit status 2 (15.799193726s)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-594077 -n embed-certs-594077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-594077 -n embed-certs-594077: exit status 2 (15.881375408s)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 unpause -p embed-certs-594077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-594077 -n embed-certs-594077
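(For reference, the pause check that fails above can be re-run by hand using the same commands the test invokes; this is a sketch assembled from the Run lines in this log, so the profile name embed-certs-594077 is specific to this run and out/minikube-linux-amd64 assumes a locally built integration binary:

    out/minikube-linux-amd64 pause -p embed-certs-594077 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-594077 -n embed-certs-594077   # test wants "Paused"; this run printed "Stopped"
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p embed-certs-594077 -n embed-certs-594077     # printed "Stopped" here, which the test accepts
    out/minikube-linux-amd64 unpause -p embed-certs-594077 --alsologtostderr -v=1

As the "Non-zero exit" lines show, minikube status exits with code 2 when a queried component is not running, which the test treats as "may be ok" and then compares the printed value.)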
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-594077 -n embed-certs-594077
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p embed-certs-594077 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-594077 logs -n 25: (1.659566589s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ addons │ enable dashboard -p embed-certs-594077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ start │ -p embed-certs-594077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --kubernetes-version=v1.34.2 │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ image │ no-preload-616969 image list --format=json │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ pause │ -p no-preload-616969 --alsologtostderr -v=1 │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ unpause │ -p no-preload-616969 --alsologtostderr -v=1 │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ delete │ -p no-preload-616969 │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ delete │ -p no-preload-616969 │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ start │ -p auto-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 │ auto-949855 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ │
│ addons │ enable metrics-server -p newest-cni-719997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ stop │ -p newest-cni-719997 --alsologtostderr -v=3 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ addons │ enable dashboard -p newest-cni-719997 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ start │ -p newest-cni-719997 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --kubernetes-version=v1.35.0-beta.0 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:34 UTC │
│ image │ embed-certs-594077 image list --format=json │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ pause │ -p embed-certs-594077 --alsologtostderr -v=1 │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ addons │ enable metrics-server -p default-k8s-diff-port-018953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ stop │ -p default-k8s-diff-port-018953 --alsologtostderr -v=3 │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ image │ newest-cni-719997 image list --format=json │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ pause │ -p newest-cni-719997 --alsologtostderr -v=1 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ unpause │ -p newest-cni-719997 --alsologtostderr -v=1 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ delete │ -p newest-cni-719997 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ delete │ -p newest-cni-719997 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ start │ -p kindnet-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 │ kindnet-949855 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ │
│ addons │ enable dashboard -p default-k8s-diff-port-018953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ start │ -p default-k8s-diff-port-018953 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --kubernetes-version=v1.34.2 │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ │
│ unpause │ -p embed-certs-594077 --alsologtostderr -v=1 │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/13 09:34:44
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1213 09:34:44.133654 50144 out.go:360] Setting OutFile to fd 1 ...
I1213 09:34:44.133909 50144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:34:44.133917 50144 out.go:374] Setting ErrFile to fd 2...
I1213 09:34:44.133921 50144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:34:44.134131 50144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 09:34:44.134591 50144 out.go:368] Setting JSON to false
I1213 09:34:44.135680 50144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4634,"bootTime":1765613850,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1213 09:34:44.135763 50144 start.go:143] virtualization: kvm guest
I1213 09:34:44.137725 50144 out.go:179] * [default-k8s-diff-port-018953] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1213 09:34:44.139291 50144 notify.go:221] Checking for updates...
I1213 09:34:44.139324 50144 out.go:179] - MINIKUBE_LOCATION=22128
I1213 09:34:44.141030 50144 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1213 09:34:44.142532 50144 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
I1213 09:34:44.145292 50144 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
I1213 09:34:44.146816 50144 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1213 09:34:44.148267 50144 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1213 09:34:44.150282 50144 config.go:182] Loaded profile config "default-k8s-diff-port-018953": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 09:34:44.150781 50144 driver.go:422] Setting default libvirt URI to qemu:///system
I1213 09:34:44.194033 50144 out.go:179] * Using the kvm2 driver based on existing profile
I1213 09:34:44.195572 50144 start.go:309] selected driver: kvm2
I1213 09:34:44.195598 50144 start.go:927] validating driver "kvm2" against &{Name:default-k8s-diff-port-018953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-018953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 09:34:44.195711 50144 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1213 09:34:44.196775 50144 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 09:34:44.196810 50144 cni.go:84] Creating CNI manager for ""
I1213 09:34:44.196896 50144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1213 09:34:44.196958 50144 start.go:353] cluster config:
{Name:default-k8s-diff-port-018953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-018953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 09:34:44.197055 50144 iso.go:125] acquiring lock: {Name:mka70bc7358d71723b0212976cce8aaa1cb0bc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 09:34:44.198938 50144 out.go:179] * Starting "default-k8s-diff-port-018953" primary control-plane node in "default-k8s-diff-port-018953" cluster
I1213 09:34:42.777697 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:42.778596 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:42.778617 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:42.779063 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:42.779098 49982 retry.go:31] will retry after 1.16996515s: waiting for domain to come up
I1213 09:34:43.950913 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:43.951731 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:43.951754 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:43.952220 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:43.952273 49982 retry.go:31] will retry after 990.024449ms: waiting for domain to come up
I1213 09:34:44.943737 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:44.944673 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:44.944698 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:44.945220 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:44.945259 49982 retry.go:31] will retry after 1.213110356s: waiting for domain to come up
I1213 09:34:46.159702 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:46.160662 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:46.160685 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:46.161142 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:46.161190 49982 retry.go:31] will retry after 2.219294638s: waiting for domain to come up
W1213 09:34:45.255022 48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
W1213 09:34:47.754969 48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
I1213 09:34:44.200532 50144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1213 09:34:44.200573 50144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
I1213 09:34:44.200590 50144 cache.go:65] Caching tarball of preloaded images
I1213 09:34:44.200687 50144 preload.go:238] Found /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1213 09:34:44.200700 50144 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
I1213 09:34:44.200800 50144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/default-k8s-diff-port-018953/config.json ...
I1213 09:34:44.201085 50144 start.go:360] acquireMachinesLock for default-k8s-diff-port-018953: {Name:mk5011dd8641588b44f3b8805193aca1c9f0973f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
==> Docker <==
Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.656562567Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.656735001Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Dec 13 09:33:54 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:33:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.873702256Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 13 09:34:03 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:03Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.467828020Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.540651785Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.542156265Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Dec 13 09:34:09 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:09Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.567348185Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.567521379Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.575597440Z" level=error msg="unexpected HTTP error handling" error="<nil>"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.575676593Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:17 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:17.140741400Z" level=error msg="Handler for POST /v1.51/containers/de05857e10ed/pause returned error: cannot pause container de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5: OCI runtime pause failed: container not running"
Dec 13 09:34:17 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:17.193091447Z" level=info msg="ignoring event" container=de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 13 09:34:50 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-pg6d8_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"88f1a58b376611f492c5b508834009cd114167f31ab62ec3d85fc7744f5c10b4\""
Dec 13 09:34:51 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.000059166Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.107124399Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.107248417Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Dec 13 09:34:52 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:52Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.140813005Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.140880277Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.152224216Z" level=error msg="unexpected HTTP error handling" error="<nil>"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.152379674Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
2f643a44c0947 6e38f40d628db 1 second ago Running storage-provisioner 2 f0fe97ebd2fa8 storage-provisioner kube-system
9d744ae1656d5 kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 49 seconds ago Running kubernetes-dashboard 0 655aad46b16e4 kubernetes-dashboard-855c9754f9-5ckvx kubernetes-dashboard
e383d4e28bee5 56cc512116c8f 58 seconds ago Running busybox 1 88fedb324336f busybox default
3500352ae1887 52546a367cc9e 58 seconds ago Running coredns 1 02321cceca25c coredns-66bc5c9577-sbl6b kube-system
de05857e10ed1 6e38f40d628db About a minute ago Exited storage-provisioner 1 f0fe97ebd2fa8 storage-provisioner kube-system
3b9abac9a0e5e 8aa150647e88a About a minute ago Running kube-proxy 1 0185479b8f1ac kube-proxy-gbh4v kube-system
652f8878d5fe5 a3e246e9556e9 About a minute ago Running etcd 1 06277dacc9521 etcd-embed-certs-594077 kube-system
8cac4fb329021 88320b5498ff2 About a minute ago Running kube-scheduler 1 a72c06cffcc53 kube-scheduler-embed-certs-594077 kube-system
bcf2fd0416777 01e8bacf0f500 About a minute ago Running kube-controller-manager 1 064f32bea94a2 kube-controller-manager-embed-certs-594077 kube-system
ea6f4d67228a1 a5f569d49a979 About a minute ago Running kube-apiserver 1 d5b4d42f70f7a kube-apiserver-embed-certs-594077 kube-system
ceb2c2191e490 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Exited busybox 0 1082bf842642a busybox default
a2c91c9fb48e6 52546a367cc9e 2 minutes ago Exited coredns 0 299749fc58f7b coredns-66bc5c9577-sbl6b kube-system
08fadc68f466b 8aa150647e88a 2 minutes ago Exited kube-proxy 0 acc0a3cff3053 kube-proxy-gbh4v kube-system
6e6c8e89a43c7 a5f569d49a979 3 minutes ago Exited kube-apiserver 0 3f64649de4057 kube-apiserver-embed-certs-594077 kube-system
d6604faaddf3f a3e246e9556e9 3 minutes ago Exited etcd 0 45afc8f5a4c50 etcd-embed-certs-594077 kube-system
4b2a5a8f531e3 01e8bacf0f500 3 minutes ago Exited kube-controller-manager 0 b2be7e1ac613b kube-controller-manager-embed-certs-594077 kube-system
cf9e0b0dcbf9b 88320b5498ff2 3 minutes ago Exited kube-scheduler 0 356edcdb1aadc kube-scheduler-embed-certs-594077 kube-system
==> coredns [3500352ae188] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:52332 - 31954 "HINFO IN 7552130428793522761.6479760196523847134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.112161133s
==> coredns [a2c91c9fb48e] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 127.0.0.1:36208 - 41315 "HINFO IN 1358106524289017339.4675404298798629450. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.043234961s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: embed-certs-594077
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=embed-certs-594077
kubernetes.io/os=linux
minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
minikube.k8s.io/name=embed-certs-594077
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_13T09_31_49_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 13 Dec 2025 09:31:45 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: embed-certs-594077
AcquireTime: <unset>
RenewTime: Sat, 13 Dec 2025 09:34:50 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 13 Dec 2025 09:34:51 +0000 Sat, 13 Dec 2025 09:31:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 13 Dec 2025 09:34:51 +0000 Sat, 13 Dec 2025 09:31:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 13 Dec 2025 09:34:51 +0000 Sat, 13 Dec 2025 09:31:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 13 Dec 2025 09:34:51 +0000 Sat, 13 Dec 2025 09:33:49 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.5
Hostname: embed-certs-594077
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3035908Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3035908Ki
pods: 110
System Info:
Machine ID: 3f9ed15ee5214a3682f9a8b37f59f7e2
System UUID: 3f9ed15e-e521-4a36-82f9-a8b37f59f7e2
Boot ID: 5905dae6-5187-479b-bc88-9a3ad2e0e23b
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.5.2
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m16s
kube-system coredns-66bc5c9577-sbl6b 100m (5%) 0 (0%) 70Mi (2%) 170Mi (5%) 2m58s
kube-system etcd-embed-certs-594077 100m (5%) 0 (0%) 100Mi (3%) 0 (0%) 3m4s
kube-system kube-apiserver-embed-certs-594077 250m (12%) 0 (0%) 0 (0%) 0 (0%) 3m4s
kube-system kube-controller-manager-embed-certs-594077 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m4s
kube-system kube-proxy-gbh4v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m58s
kube-system kube-scheduler-embed-certs-594077 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m4s
kube-system metrics-server-746fcd58dc-r9qzb 100m (5%) 0 (0%) 200Mi (6%) 0 (0%) 2m5s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m54s
kubernetes-dashboard dashboard-metrics-scraper-6ffb444bf9-42zcv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61s
kubernetes-dashboard kubernetes-dashboard-855c9754f9-5ckvx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 370Mi (12%) 170Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m55s kube-proxy
Normal Starting 65s kube-proxy
Normal Starting 3m12s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m12s (x8 over 3m12s) kubelet Node embed-certs-594077 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m12s (x8 over 3m12s) kubelet Node embed-certs-594077 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m12s (x7 over 3m12s) kubelet Node embed-certs-594077 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m12s kubelet Updated Node Allocatable limit across pods
Normal Starting 3m4s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 3m4s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 3m4s kubelet Node embed-certs-594077 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m4s kubelet Node embed-certs-594077 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m4s kubelet Node embed-certs-594077 status is now: NodeHasSufficientPID
Normal NodeReady 3m kubelet Node embed-certs-594077 status is now: NodeReady
Normal RegisteredNode 2m59s node-controller Node embed-certs-594077 event: Registered Node embed-certs-594077 in Controller
Normal Starting 74s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 74s (x8 over 74s) kubelet Node embed-certs-594077 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 74s (x8 over 74s) kubelet Node embed-certs-594077 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 74s (x7 over 74s) kubelet Node embed-certs-594077 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 74s kubelet Updated Node Allocatable limit across pods
Warning Rebooted 68s kubelet Node embed-certs-594077 has been rebooted, boot id: 5905dae6-5187-479b-bc88-9a3ad2e0e23b
Normal RegisteredNode 62s node-controller Node embed-certs-594077 event: Registered Node embed-certs-594077 in Controller
Normal Starting 2s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 1s kubelet Node embed-certs-594077 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 1s kubelet Node embed-certs-594077 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 1s kubelet Node embed-certs-594077 status is now: NodeHasSufficientPID
==> dmesg <==
[Dec13 09:33] Booted with the nomodeset parameter. Only the system framebuffer will be available
[ +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
[ +0.001642] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.004177] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
[ +0.886210] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.119341] kauditd_printk_skb: 1 callbacks suppressed
[ +0.139009] kauditd_printk_skb: 421 callbacks suppressed
[ +8.056256] kauditd_printk_skb: 193 callbacks suppressed
[ +2.474062] kauditd_printk_skb: 128 callbacks suppressed
[ +0.838025] kauditd_printk_skb: 259 callbacks suppressed
[Dec13 09:34] kauditd_printk_skb: 2 callbacks suppressed
[ +0.277121] kauditd_printk_skb: 11 callbacks suppressed
[ +0.213014] kauditd_printk_skb: 35 callbacks suppressed
==> etcd [652f8878d5fe] <==
{"level":"warn","ts":"2025-12-13T09:33:42.931665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:42.982941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53938","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.000194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53944","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.029221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.044582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53974","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.070524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.114197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54022","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.160976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54040","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.185685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.223885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54098","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.236727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.245766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.265875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.278357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.291236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.309255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54196","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.331705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54224","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.390604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54242","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.413099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.484942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54256","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.540577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54264","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.562762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54284","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.586381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.605698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54320","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.726718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
==> etcd [d6604faaddf3] <==
{"level":"warn","ts":"2025-12-13T09:31:43.869789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51370","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:43.901134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:43.939127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51400","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:43.963838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51414","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:43.987753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:44.019434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:44.214190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51468","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-12-13T09:32:49.290228Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-12-13T09:32:49.290316Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"embed-certs-594077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
{"level":"error","ts":"2025-12-13T09:32:49.290413Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-13T09:32:56.297824Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-13T09:32:56.297931Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-13T09:32:56.297953Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c5263387c79c0223","current-leader-member-id":"c5263387c79c0223"}
{"level":"info","ts":"2025-12-13T09:32:56.298053Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-12-13T09:32:56.298064Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"warn","ts":"2025-12-13T09:32:56.298487Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-13T09:32:56.298533Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-13T09:32:56.298541Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.5:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-12-13T09:32:56.300679Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-13T09:32:56.300980Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-13T09:32:56.301169Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-13T09:32:56.471124Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
{"level":"error","ts":"2025-12-13T09:32:56.471216Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.5:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-13T09:32:56.471279Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
{"level":"info","ts":"2025-12-13T09:32:56.471291Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"embed-certs-594077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
==> kernel <==
09:34:52 up 1 min, 0 users, load average: 1.24, 0.63, 0.24
Linux embed-certs-594077 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [6e6c8e89a43c] <==
W1213 09:32:58.579780 1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.586497 1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.612833 1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.628739 1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.634886 1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.673263 1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.687212 1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.757751 1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.783556 1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.793183 1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.801878 1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.893696 1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.903550 1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.951767 1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.041688 1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.050000 1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.059710 1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.112782 1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.136210 1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.177829 1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.229503 1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.241149 1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.252494 1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.257821 1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.277534 1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
==> kube-apiserver [ea6f4d67228a] <==
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1213 09:33:45.920199 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1213 09:33:48.019787 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
W1213 09:33:48.110189 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.5]
I1213 09:33:48.112210 1 controller.go:667] quota admission added evaluator for: endpoints
I1213 09:33:48.682383 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1213 09:33:48.768099 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1213 09:33:48.837638 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1213 09:33:48.853781 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1213 09:33:50.490654 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1213 09:33:50.491397 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1213 09:33:51.267285 1 controller.go:667] quota admission added evaluator for: namespaces
I1213 09:33:52.310927 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.138.136"}
I1213 09:33:52.369329 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.120.51"}
W1213 09:34:49.882399 1 handler_proxy.go:99] no RequestInfo found in the context
E1213 09:34:49.882606 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1213 09:34:49.882638 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W1213 09:34:49.888307 1 handler_proxy.go:99] no RequestInfo found in the context
E1213 09:34:49.888364 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
I1213 09:34:49.888377 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-controller-manager [4b2a5a8f531e] <==
I1213 09:31:53.336039 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1213 09:31:53.336048 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1213 09:31:53.336056 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1213 09:31:53.343453 1 shared_informer.go:356] "Caches are synced" controller="service account"
I1213 09:31:53.353435 1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
I1213 09:31:53.355442 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1213 09:31:53.355525 1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
I1213 09:31:53.357255 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1213 09:31:53.358433 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-594077" podCIDRs=["10.244.0.0/24"]
I1213 09:31:53.360081 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1213 09:31:53.360496 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1213 09:31:53.358818 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1213 09:31:53.361073 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1213 09:31:53.361384 1 shared_informer.go:356] "Caches are synced" controller="expand"
I1213 09:31:53.358828 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1213 09:31:53.361985 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1213 09:31:53.362214 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I1213 09:31:53.363737 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1213 09:31:53.363994 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1213 09:31:53.364667 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1213 09:31:53.370811 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1213 09:31:53.370831 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1213 09:31:53.370965 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1213 09:31:53.370971 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1213 09:31:53.387013 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
==> kube-controller-manager [bcf2fd041677] <==
I1213 09:33:50.374959 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1213 09:33:50.374993 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I1213 09:33:50.375011 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I1213 09:33:50.375874 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1213 09:33:50.403422 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1213 09:33:50.396647 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1213 09:33:50.396659 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1213 09:33:50.420341 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1213 09:33:50.435314 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1213 09:33:50.442414 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1213 09:33:50.443866 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1213 09:33:50.450389 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1213 09:33:50.450482 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1213 09:33:50.450491 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
E1213 09:33:51.616002 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.697375 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.734843 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.784632 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.793970 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.832688 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.832688 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.851357 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.871143 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:34:50.051560 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I1213 09:34:50.064726 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
==> kube-proxy [08fadc68f466] <==
I1213 09:31:56.508637 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1213 09:31:56.609891 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1213 09:31:56.609953 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.5"]
E1213 09:31:56.610205 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1213 09:31:56.804733 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1213 09:31:56.804847 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1213 09:31:56.804896 1 server_linux.go:132] "Using iptables Proxier"
I1213 09:31:56.865819 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1213 09:31:56.878175 1 server.go:527] "Version info" version="v1.34.2"
I1213 09:31:56.879530 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1213 09:31:56.898538 1 config.go:200] "Starting service config controller"
I1213 09:31:56.898916 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1213 09:31:56.899076 1 config.go:106] "Starting endpoint slice config controller"
I1213 09:31:56.899279 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1213 09:31:56.899942 1 config.go:309] "Starting node config controller"
I1213 09:31:56.900469 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1213 09:31:56.900662 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1213 09:31:56.906769 1 config.go:403] "Starting serviceCIDR config controller"
I1213 09:31:56.908443 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1213 09:31:57.000158 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1213 09:31:57.001827 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1213 09:31:57.009299 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-proxy [3b9abac9a0e5] <==
I1213 09:33:47.146798 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1213 09:33:47.248054 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1213 09:33:47.248124 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.5"]
E1213 09:33:47.248660 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1213 09:33:47.305149 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1213 09:33:47.305245 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1213 09:33:47.305313 1 server_linux.go:132] "Using iptables Proxier"
I1213 09:33:47.321084 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1213 09:33:47.321935 1 server.go:527] "Version info" version="v1.34.2"
I1213 09:33:47.321978 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1213 09:33:47.329818 1 config.go:309] "Starting node config controller"
I1213 09:33:47.329864 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1213 09:33:47.329872 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1213 09:33:47.330417 1 config.go:200] "Starting service config controller"
I1213 09:33:47.330506 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1213 09:33:47.330530 1 config.go:106] "Starting endpoint slice config controller"
I1213 09:33:47.330533 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1213 09:33:47.330543 1 config.go:403] "Starting serviceCIDR config controller"
I1213 09:33:47.330546 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1213 09:33:47.431478 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1213 09:33:47.431478 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1213 09:33:47.431538 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [8cac4fb32902] <==
I1213 09:33:44.776239 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1213 09:33:44.788584 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1213 09:33:44.789576 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1213 09:33:44.793480 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1213 09:33:44.793495 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1213 09:33:44.834891 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1213 09:33:44.835669 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 09:33:44.836791 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 09:33:44.836887 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1213 09:33:44.836961 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1213 09:33:44.837040 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1213 09:33:44.837096 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1213 09:33:44.837155 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1213 09:33:44.837891 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1213 09:33:44.838140 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1213 09:33:44.838370 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1213 09:33:44.838427 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1213 09:33:44.838516 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1213 09:33:44.838546 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1213 09:33:44.838598 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1213 09:33:44.840424 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1213 09:33:44.840886 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1213 09:33:44.842576 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1213 09:33:44.842610 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
I1213 09:33:46.493666 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kube-scheduler [cf9e0b0dcbf9] <==
E1213 09:31:45.673059 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1213 09:31:45.672922 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1213 09:31:45.674367 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1213 09:31:45.674656 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1213 09:31:45.675121 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 09:31:45.675359 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 09:31:46.593861 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1213 09:31:46.620372 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1213 09:31:46.637400 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1213 09:31:46.671885 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1213 09:31:46.675254 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1213 09:31:46.679901 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1213 09:31:46.741636 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 09:31:46.785658 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1213 09:31:46.807123 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1213 09:31:46.817797 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1213 09:31:46.982287 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1213 09:31:47.004930 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1213 09:31:47.032549 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
I1213 09:31:50.148815 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1213 09:32:49.158251 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1213 09:32:49.158342 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1213 09:32:49.158389 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1213 09:32:49.158466 1 server.go:265] "[graceful-termination] secure server is exiting"
E1213 09:32:49.158499 1 run.go:72] "command failed" err="finished without leader elect"
==> kubelet <==
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.476932 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-ca-certs\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.476983 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-flexvolume-dir\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477010 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2791db0168e112ad1ebb49d47ad7acc4-kubeconfig\") pod \"kube-scheduler-embed-certs-594077\" (UID: \"2791db0168e112ad1ebb49d47ad7acc4\") " pod="kube-system/kube-scheduler-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477028 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/37016cb987c6e71339040b3541624dd3-etcd-certs\") pod \"etcd-embed-certs-594077\" (UID: \"37016cb987c6e71339040b3541624dd3\") " pod="kube-system/etcd-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477043 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-k8s-certs\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477057 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-usr-share-ca-certificates\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477075 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-k8s-certs\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477090 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-kubeconfig\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477105 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-usr-share-ca-certificates\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477118 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/37016cb987c6e71339040b3541624dd3-etcd-data\") pod \"etcd-embed-certs-594077\" (UID: \"37016cb987c6e71339040b3541624dd3\") " pod="kube-system/etcd-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477149 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-ca-certs\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.552903 4246 apiserver.go:52] "Watching apiserver"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.598581 4246 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.678949 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c24123e2-281a-4e07-83eb-bf2a70ed9689-lib-modules\") pod \"kube-proxy-gbh4v\" (UID: \"c24123e2-281a-4e07-83eb-bf2a70ed9689\") " pod="kube-system/kube-proxy-gbh4v"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.679096 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c24123e2-281a-4e07-83eb-bf2a70ed9689-xtables-lock\") pod \"kube-proxy-gbh4v\" (UID: \"c24123e2-281a-4e07-83eb-bf2a70ed9689\") " pod="kube-system/kube-proxy-gbh4v"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.679116 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1db9cb1e-bc7a-4d9f-9042-936fcad750f7-tmp\") pod \"storage-provisioner\" (UID: \"1db9cb1e-bc7a-4d9f-9042-936fcad750f7\") " pod="kube-system/storage-provisioner"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.864587 4246 scope.go:117] "RemoveContainer" containerID="de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.113944 4246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114004 4246 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114210 4246 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-42zcv_kubernetes-dashboard(7123ef17-ca61-4aa4-a10e-b29ec51a6667): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114247 4246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-42zcv" podUID="7123ef17-ca61-4aa4-a10e-b29ec51a6667"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.153914 4246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.154760 4246 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.154958 4246 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-r9qzb_kube-system(25f7da03-5692-48c6-8b6e-22b84e1aec43): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.155038 4246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-r9qzb" podUID="25f7da03-5692-48c6-8b6e-22b84e1aec43"
==> kubernetes-dashboard [9d744ae1656d] <==
2025/12/13 09:34:04 Starting overwatch
2025/12/13 09:34:04 Using namespace: kubernetes-dashboard
2025/12/13 09:34:04 Using in-cluster config to connect to apiserver
2025/12/13 09:34:04 Using secret token for csrf signing
2025/12/13 09:34:04 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/12/13 09:34:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/12/13 09:34:04 Successful initial request to the apiserver, version: v1.34.2
2025/12/13 09:34:04 Generating JWE encryption key
2025/12/13 09:34:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/12/13 09:34:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/12/13 09:34:04 Initializing JWE encryption key from synchronized object
2025/12/13 09:34:04 Creating in-cluster Sidecar client
2025/12/13 09:34:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/12/13 09:34:04 Serving insecurely on HTTP port: 9090
2025/12/13 09:34:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [2f643a44c094] <==
I1213 09:34:52.258732 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1213 09:34:52.299738 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1213 09:34:52.300284 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W1213 09:34:52.305497 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [de05857e10ed] <==
I1213 09:33:46.911515 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1213 09:34:16.919520 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077
helpers_test.go:270: (dbg) Run: kubectl --context embed-certs-594077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv: exit status 1 (66.721003ms)
** stderr **
Error from server (NotFound): pods "metrics-server-746fcd58dc-r9qzb" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-42zcv" not found
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-594077 -n embed-certs-594077
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p embed-certs-594077 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-594077 logs -n 25: (1.308832085s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ addons │ enable dashboard -p embed-certs-594077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ start │ -p embed-certs-594077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --kubernetes-version=v1.34.2 │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ image │ no-preload-616969 image list --format=json │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ pause │ -p no-preload-616969 --alsologtostderr -v=1 │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ unpause │ -p no-preload-616969 --alsologtostderr -v=1 │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ delete │ -p no-preload-616969 │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ delete │ -p no-preload-616969 │ no-preload-616969 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ start │ -p auto-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 │ auto-949855 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ │
│ addons │ enable metrics-server -p newest-cni-719997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ stop │ -p newest-cni-719997 --alsologtostderr -v=3 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ addons │ enable dashboard -p newest-cni-719997 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:33 UTC │
│ start │ -p newest-cni-719997 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --kubernetes-version=v1.35.0-beta.0 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:33 UTC │ 13 Dec 25 09:34 UTC │
│ image │ embed-certs-594077 image list --format=json │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ pause │ -p embed-certs-594077 --alsologtostderr -v=1 │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ addons │ enable metrics-server -p default-k8s-diff-port-018953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ stop │ -p default-k8s-diff-port-018953 --alsologtostderr -v=3 │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ image │ newest-cni-719997 image list --format=json │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ pause │ -p newest-cni-719997 --alsologtostderr -v=1 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ unpause │ -p newest-cni-719997 --alsologtostderr -v=1 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ delete │ -p newest-cni-719997 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ delete │ -p newest-cni-719997 │ newest-cni-719997 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ start │ -p kindnet-949855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 │ kindnet-949855 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ │
│ addons │ enable dashboard -p default-k8s-diff-port-018953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
│ start │ -p default-k8s-diff-port-018953 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --kubernetes-version=v1.34.2 │ default-k8s-diff-port-018953 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ │
│ unpause │ -p embed-certs-594077 --alsologtostderr -v=1 │ embed-certs-594077 │ jenkins │ v1.37.0 │ 13 Dec 25 09:34 UTC │ 13 Dec 25 09:34 UTC │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/13 09:34:44
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1213 09:34:44.133654 50144 out.go:360] Setting OutFile to fd 1 ...
I1213 09:34:44.133909 50144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:34:44.133917 50144 out.go:374] Setting ErrFile to fd 2...
I1213 09:34:44.133921 50144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:34:44.134131 50144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-9390/.minikube/bin
I1213 09:34:44.134591 50144 out.go:368] Setting JSON to false
I1213 09:34:44.135680 50144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4634,"bootTime":1765613850,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1213 09:34:44.135763 50144 start.go:143] virtualization: kvm guest
I1213 09:34:44.137725 50144 out.go:179] * [default-k8s-diff-port-018953] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1213 09:34:44.139291 50144 notify.go:221] Checking for updates...
I1213 09:34:44.139324 50144 out.go:179] - MINIKUBE_LOCATION=22128
I1213 09:34:44.141030 50144 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1213 09:34:44.142532 50144 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22128-9390/kubeconfig
I1213 09:34:44.145292 50144 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-9390/.minikube
I1213 09:34:44.146816 50144 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1213 09:34:44.148267 50144 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1213 09:34:44.150282 50144 config.go:182] Loaded profile config "default-k8s-diff-port-018953": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 09:34:44.150781 50144 driver.go:422] Setting default libvirt URI to qemu:///system
I1213 09:34:44.194033 50144 out.go:179] * Using the kvm2 driver based on existing profile
I1213 09:34:44.195572 50144 start.go:309] selected driver: kvm2
I1213 09:34:44.195598 50144 start.go:927] validating driver "kvm2" against &{Name:default-k8s-diff-port-018953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-018953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 09:34:44.195711 50144 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1213 09:34:44.196775 50144 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 09:34:44.196810 50144 cni.go:84] Creating CNI manager for ""
I1213 09:34:44.196896 50144 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1213 09:34:44.196958 50144 start.go:353] cluster config:
{Name:default-k8s-diff-port-018953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-018953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.59 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 09:34:44.197055 50144 iso.go:125] acquiring lock: {Name:mka70bc7358d71723b0212976cce8aaa1cb0bc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 09:34:44.198938 50144 out.go:179] * Starting "default-k8s-diff-port-018953" primary control-plane node in "default-k8s-diff-port-018953" cluster
I1213 09:34:42.777697 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:42.778596 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:42.778617 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:42.779063 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:42.779098 49982 retry.go:31] will retry after 1.16996515s: waiting for domain to come up
I1213 09:34:43.950913 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:43.951731 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:43.951754 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:43.952220 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:43.952273 49982 retry.go:31] will retry after 990.024449ms: waiting for domain to come up
I1213 09:34:44.943737 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:44.944673 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:44.944698 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:44.945220 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:44.945259 49982 retry.go:31] will retry after 1.213110356s: waiting for domain to come up
I1213 09:34:46.159702 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:46.160662 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:46.160685 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:46.161142 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:46.161190 49982 retry.go:31] will retry after 2.219294638s: waiting for domain to come up
W1213 09:34:45.255022 48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
W1213 09:34:47.754969 48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
I1213 09:34:44.200532 50144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1213 09:34:44.200573 50144 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
I1213 09:34:44.200590 50144 cache.go:65] Caching tarball of preloaded images
I1213 09:34:44.200687 50144 preload.go:238] Found /home/jenkins/minikube-integration/22128-9390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1213 09:34:44.200700 50144 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
I1213 09:34:44.200800 50144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-9390/.minikube/profiles/default-k8s-diff-port-018953/config.json ...
I1213 09:34:44.201085 50144 start.go:360] acquireMachinesLock for default-k8s-diff-port-018953: {Name:mk5011dd8641588b44f3b8805193aca1c9f0973f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1213 09:34:48.382833 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:48.383965 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:48.383989 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:48.384580 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:48.384618 49982 retry.go:31] will retry after 2.900119926s: waiting for domain to come up
I1213 09:34:51.288687 49982 main.go:143] libmachine: domain kindnet-949855 has defined MAC address 52:54:00:35:93:4c in network mk-kindnet-949855
I1213 09:34:51.290269 49982 main.go:143] libmachine: no network interface addresses found for domain kindnet-949855 (source=lease)
I1213 09:34:51.290294 49982 main.go:143] libmachine: trying to list again with source=arp
I1213 09:34:51.290800 49982 main.go:143] libmachine: unable to find current IP address of domain kindnet-949855 in network mk-kindnet-949855 (interfaces detected: [])
I1213 09:34:51.290844 49982 retry.go:31] will retry after 2.549669485s: waiting for domain to come up
W1213 09:34:50.253513 48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
W1213 09:34:52.255803 48864 pod_ready.go:104] pod "coredns-66bc5c9577-chjjw" is not "Ready", error: <nil>
==> Docker <==
Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.656562567Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.656735001Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Dec 13 09:33:54 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:33:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Dec 13 09:33:54 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:33:54.873702256Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 13 09:34:03 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:03Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.467828020Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.540651785Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.542156265Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Dec 13 09:34:09 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:09Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.567348185Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.567521379Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.575597440Z" level=error msg="unexpected HTTP error handling" error="<nil>"
Dec 13 09:34:09 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:09.575676593Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:17 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:17.140741400Z" level=error msg="Handler for POST /v1.51/containers/de05857e10ed/pause returned error: cannot pause container de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5: OCI runtime pause failed: container not running"
Dec 13 09:34:17 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:17.193091447Z" level=info msg="ignoring event" container=de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 13 09:34:50 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-pg6d8_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"88f1a58b376611f492c5b508834009cd114167f31ab62ec3d85fc7744f5c10b4\""
Dec 13 09:34:51 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.000059166Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.107124399Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.107248417Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Dec 13 09:34:52 embed-certs-594077 cri-dockerd[1561]: time="2025-12-13T09:34:52Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.140813005Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.140880277Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.152224216Z" level=error msg="unexpected HTTP error handling" error="<nil>"
Dec 13 09:34:52 embed-certs-594077 dockerd[1179]: time="2025-12-13T09:34:52.152379674Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
2f643a44c0947 6e38f40d628db 3 seconds ago Running storage-provisioner 2 f0fe97ebd2fa8 storage-provisioner kube-system
9d744ae1656d5 kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 51 seconds ago Running kubernetes-dashboard 0 655aad46b16e4 kubernetes-dashboard-855c9754f9-5ckvx kubernetes-dashboard
e383d4e28bee5 56cc512116c8f About a minute ago Running busybox 1 88fedb324336f busybox default
3500352ae1887 52546a367cc9e About a minute ago Running coredns 1 02321cceca25c coredns-66bc5c9577-sbl6b kube-system
de05857e10ed1 6e38f40d628db About a minute ago Exited storage-provisioner 1 f0fe97ebd2fa8 storage-provisioner kube-system
3b9abac9a0e5e 8aa150647e88a About a minute ago Running kube-proxy 1 0185479b8f1ac kube-proxy-gbh4v kube-system
652f8878d5fe5 a3e246e9556e9 About a minute ago Running etcd 1 06277dacc9521 etcd-embed-certs-594077 kube-system
8cac4fb329021 88320b5498ff2 About a minute ago Running kube-scheduler 1 a72c06cffcc53 kube-scheduler-embed-certs-594077 kube-system
bcf2fd0416777 01e8bacf0f500 About a minute ago Running kube-controller-manager 1 064f32bea94a2 kube-controller-manager-embed-certs-594077 kube-system
ea6f4d67228a1 a5f569d49a979 About a minute ago Running kube-apiserver 1 d5b4d42f70f7a kube-apiserver-embed-certs-594077 kube-system
ceb2c2191e490 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Exited busybox 0 1082bf842642a busybox default
a2c91c9fb48e6 52546a367cc9e 2 minutes ago Exited coredns 0 299749fc58f7b coredns-66bc5c9577-sbl6b kube-system
08fadc68f466b 8aa150647e88a 2 minutes ago Exited kube-proxy 0 acc0a3cff3053 kube-proxy-gbh4v kube-system
6e6c8e89a43c7 a5f569d49a979 3 minutes ago Exited kube-apiserver 0 3f64649de4057 kube-apiserver-embed-certs-594077 kube-system
d6604faaddf3f a3e246e9556e9 3 minutes ago Exited etcd 0 45afc8f5a4c50 etcd-embed-certs-594077 kube-system
4b2a5a8f531e3 01e8bacf0f500 3 minutes ago Exited kube-controller-manager 0 b2be7e1ac613b kube-controller-manager-embed-certs-594077 kube-system
cf9e0b0dcbf9b 88320b5498ff2 3 minutes ago Exited kube-scheduler 0 356edcdb1aadc kube-scheduler-embed-certs-594077 kube-system
==> coredns [3500352ae188] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:52332 - 31954 "HINFO IN 7552130428793522761.6479760196523847134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.112161133s
==> coredns [a2c91c9fb48e] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 127.0.0.1:36208 - 41315 "HINFO IN 1358106524289017339.4675404298798629450. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.043234961s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: embed-certs-594077
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=embed-certs-594077
kubernetes.io/os=linux
minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
minikube.k8s.io/name=embed-certs-594077
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_13T09_31_49_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 13 Dec 2025 09:31:45 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: embed-certs-594077
AcquireTime: <unset>
RenewTime: Sat, 13 Dec 2025 09:34:50 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 13 Dec 2025 09:34:51 +0000 Sat, 13 Dec 2025 09:31:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 13 Dec 2025 09:34:51 +0000 Sat, 13 Dec 2025 09:31:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 13 Dec 2025 09:34:51 +0000 Sat, 13 Dec 2025 09:31:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 13 Dec 2025 09:34:51 +0000 Sat, 13 Dec 2025 09:33:49 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.5
Hostname: embed-certs-594077
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3035908Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3035908Ki
pods: 110
System Info:
Machine ID: 3f9ed15ee5214a3682f9a8b37f59f7e2
System UUID: 3f9ed15e-e521-4a36-82f9-a8b37f59f7e2
Boot ID: 5905dae6-5187-479b-bc88-9a3ad2e0e23b
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.5.2
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m18s
kube-system coredns-66bc5c9577-sbl6b 100m (5%) 0 (0%) 70Mi (2%) 170Mi (5%) 3m
kube-system etcd-embed-certs-594077 100m (5%) 0 (0%) 100Mi (3%) 0 (0%) 3m6s
kube-system kube-apiserver-embed-certs-594077 250m (12%) 0 (0%) 0 (0%) 0 (0%) 3m6s
kube-system kube-controller-manager-embed-certs-594077 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m6s
kube-system kube-proxy-gbh4v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m
kube-system kube-scheduler-embed-certs-594077 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m6s
kube-system metrics-server-746fcd58dc-r9qzb 100m (5%) 0 (0%) 200Mi (6%) 0 (0%) 2m7s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m56s
kubernetes-dashboard dashboard-metrics-scraper-6ffb444bf9-42zcv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 63s
kubernetes-dashboard kubernetes-dashboard-855c9754f9-5ckvx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 63s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 370Mi (12%) 170Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m57s kube-proxy
Normal Starting 67s kube-proxy
Normal Starting 3m14s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m14s (x8 over 3m14s) kubelet Node embed-certs-594077 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m14s (x8 over 3m14s) kubelet Node embed-certs-594077 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m14s (x7 over 3m14s) kubelet Node embed-certs-594077 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m14s kubelet Updated Node Allocatable limit across pods
Normal Starting 3m6s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 3m6s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 3m6s kubelet Node embed-certs-594077 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m6s kubelet Node embed-certs-594077 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m6s kubelet Node embed-certs-594077 status is now: NodeHasSufficientPID
Normal NodeReady 3m2s kubelet Node embed-certs-594077 status is now: NodeReady
Normal RegisteredNode 3m1s node-controller Node embed-certs-594077 event: Registered Node embed-certs-594077 in Controller
Normal Starting 76s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 76s (x8 over 76s) kubelet Node embed-certs-594077 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 76s (x8 over 76s) kubelet Node embed-certs-594077 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 76s (x7 over 76s) kubelet Node embed-certs-594077 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 76s kubelet Updated Node Allocatable limit across pods
Warning Rebooted 70s kubelet Node embed-certs-594077 has been rebooted, boot id: 5905dae6-5187-479b-bc88-9a3ad2e0e23b
Normal RegisteredNode 64s node-controller Node embed-certs-594077 event: Registered Node embed-certs-594077 in Controller
Normal Starting 4s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 3s kubelet Node embed-certs-594077 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3s kubelet Node embed-certs-594077 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3s kubelet Node embed-certs-594077 status is now: NodeHasSufficientPID
==> dmesg <==
[Dec13 09:33] Booted with the nomodeset parameter. Only the system framebuffer will be available
[ +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
[ +0.001642] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.004177] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
[ +0.886210] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.119341] kauditd_printk_skb: 1 callbacks suppressed
[ +0.139009] kauditd_printk_skb: 421 callbacks suppressed
[ +8.056256] kauditd_printk_skb: 193 callbacks suppressed
[ +2.474062] kauditd_printk_skb: 128 callbacks suppressed
[ +0.838025] kauditd_printk_skb: 259 callbacks suppressed
[Dec13 09:34] kauditd_printk_skb: 2 callbacks suppressed
[ +0.277121] kauditd_printk_skb: 11 callbacks suppressed
[ +0.213014] kauditd_printk_skb: 35 callbacks suppressed
==> etcd [652f8878d5fe] <==
{"level":"warn","ts":"2025-12-13T09:33:42.931665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:42.982941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53938","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.000194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53944","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.029221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.044582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53974","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.070524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.114197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54022","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.160976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54040","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.185685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.223885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54098","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.236727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.245766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.265875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.278357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.291236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.309255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54196","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.331705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54224","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.390604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54242","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.413099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.484942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54256","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.540577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54264","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.562762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54284","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.586381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.605698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54320","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:33:43.726718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
==> etcd [d6604faaddf3] <==
{"level":"warn","ts":"2025-12-13T09:31:43.869789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51370","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:43.901134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:43.939127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51400","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:43.963838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51414","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:43.987753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:44.019434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-13T09:31:44.214190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51468","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-12-13T09:32:49.290228Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-12-13T09:32:49.290316Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"embed-certs-594077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
{"level":"error","ts":"2025-12-13T09:32:49.290413Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-13T09:32:56.297824Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-13T09:32:56.297931Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-13T09:32:56.297953Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c5263387c79c0223","current-leader-member-id":"c5263387c79c0223"}
{"level":"info","ts":"2025-12-13T09:32:56.298053Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-12-13T09:32:56.298064Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"warn","ts":"2025-12-13T09:32:56.298487Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-13T09:32:56.298533Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-13T09:32:56.298541Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.5:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-12-13T09:32:56.300679Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-13T09:32:56.300980Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-13T09:32:56.301169Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-13T09:32:56.471124Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
{"level":"error","ts":"2025-12-13T09:32:56.471216Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.5:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-13T09:32:56.471279Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
{"level":"info","ts":"2025-12-13T09:32:56.471291Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"embed-certs-594077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
==> kernel <==
09:34:54 up 1 min, 0 users, load average: 1.24, 0.63, 0.24
Linux embed-certs-594077 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [6e6c8e89a43c] <==
W1213 09:32:58.579780 1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.586497 1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.612833 1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.628739 1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.634886 1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.673263 1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.687212 1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.757751 1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.783556 1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.793183 1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.801878 1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.893696 1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.903550 1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:58.951767 1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.041688 1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.050000 1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.059710 1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.112782 1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.136210 1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.177829 1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.229503 1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.241149 1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.252494 1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.257821 1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1213 09:32:59.277534 1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
==> kube-apiserver [ea6f4d67228a] <==
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1213 09:33:45.920199 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1213 09:33:48.019787 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
W1213 09:33:48.110189 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.5]
I1213 09:33:48.112210 1 controller.go:667] quota admission added evaluator for: endpoints
I1213 09:33:48.682383 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1213 09:33:48.768099 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1213 09:33:48.837638 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1213 09:33:48.853781 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1213 09:33:50.490654 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1213 09:33:50.491397 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1213 09:33:51.267285 1 controller.go:667] quota admission added evaluator for: namespaces
I1213 09:33:52.310927 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.138.136"}
I1213 09:33:52.369329 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.120.51"}
W1213 09:34:49.882399 1 handler_proxy.go:99] no RequestInfo found in the context
E1213 09:34:49.882606 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1213 09:34:49.882638 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W1213 09:34:49.888307 1 handler_proxy.go:99] no RequestInfo found in the context
E1213 09:34:49.888364 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
I1213 09:34:49.888377 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-controller-manager [4b2a5a8f531e] <==
I1213 09:31:53.336039 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1213 09:31:53.336048 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1213 09:31:53.336056 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1213 09:31:53.343453 1 shared_informer.go:356] "Caches are synced" controller="service account"
I1213 09:31:53.353435 1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
I1213 09:31:53.355442 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1213 09:31:53.355525 1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
I1213 09:31:53.357255 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1213 09:31:53.358433 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-594077" podCIDRs=["10.244.0.0/24"]
I1213 09:31:53.360081 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1213 09:31:53.360496 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1213 09:31:53.358818 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1213 09:31:53.361073 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1213 09:31:53.361384 1 shared_informer.go:356] "Caches are synced" controller="expand"
I1213 09:31:53.358828 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1213 09:31:53.361985 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1213 09:31:53.362214 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I1213 09:31:53.363737 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1213 09:31:53.363994 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1213 09:31:53.364667 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1213 09:31:53.370811 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1213 09:31:53.370831 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1213 09:31:53.370965 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1213 09:31:53.370971 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1213 09:31:53.387013 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
==> kube-controller-manager [bcf2fd041677] <==
I1213 09:33:50.374959 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1213 09:33:50.374993 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I1213 09:33:50.375011 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I1213 09:33:50.375874 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1213 09:33:50.403422 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1213 09:33:50.396647 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1213 09:33:50.396659 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1213 09:33:50.420341 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1213 09:33:50.435314 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1213 09:33:50.442414 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1213 09:33:50.443866 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1213 09:33:50.450389 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1213 09:33:50.450482 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1213 09:33:50.450491 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
E1213 09:33:51.616002 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.697375 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.734843 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.784632 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.793970 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.832688 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.832688 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.851357 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:33:51.871143 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1213 09:34:50.051560 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I1213 09:34:50.064726 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
==> kube-proxy [08fadc68f466] <==
I1213 09:31:56.508637 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1213 09:31:56.609891 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1213 09:31:56.609953 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.5"]
E1213 09:31:56.610205 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1213 09:31:56.804733 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1213 09:31:56.804847 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1213 09:31:56.804896 1 server_linux.go:132] "Using iptables Proxier"
I1213 09:31:56.865819 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1213 09:31:56.878175 1 server.go:527] "Version info" version="v1.34.2"
I1213 09:31:56.879530 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1213 09:31:56.898538 1 config.go:200] "Starting service config controller"
I1213 09:31:56.898916 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1213 09:31:56.899076 1 config.go:106] "Starting endpoint slice config controller"
I1213 09:31:56.899279 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1213 09:31:56.899942 1 config.go:309] "Starting node config controller"
I1213 09:31:56.900469 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1213 09:31:56.900662 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1213 09:31:56.906769 1 config.go:403] "Starting serviceCIDR config controller"
I1213 09:31:56.908443 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1213 09:31:57.000158 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1213 09:31:57.001827 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1213 09:31:57.009299 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-proxy [3b9abac9a0e5] <==
I1213 09:33:47.146798 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1213 09:33:47.248054 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1213 09:33:47.248124 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.5"]
E1213 09:33:47.248660 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1213 09:33:47.305149 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1213 09:33:47.305245 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1213 09:33:47.305313 1 server_linux.go:132] "Using iptables Proxier"
I1213 09:33:47.321084 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1213 09:33:47.321935 1 server.go:527] "Version info" version="v1.34.2"
I1213 09:33:47.321978 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1213 09:33:47.329818 1 config.go:309] "Starting node config controller"
I1213 09:33:47.329864 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1213 09:33:47.329872 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1213 09:33:47.330417 1 config.go:200] "Starting service config controller"
I1213 09:33:47.330506 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1213 09:33:47.330530 1 config.go:106] "Starting endpoint slice config controller"
I1213 09:33:47.330533 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1213 09:33:47.330543 1 config.go:403] "Starting serviceCIDR config controller"
I1213 09:33:47.330546 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1213 09:33:47.431478 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1213 09:33:47.431478 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1213 09:33:47.431538 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [8cac4fb32902] <==
I1213 09:33:44.776239 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1213 09:33:44.788584 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1213 09:33:44.789576 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1213 09:33:44.793480 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1213 09:33:44.793495 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1213 09:33:44.834891 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1213 09:33:44.835669 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 09:33:44.836791 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 09:33:44.836887 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1213 09:33:44.836961 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1213 09:33:44.837040 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1213 09:33:44.837096 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1213 09:33:44.837155 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1213 09:33:44.837891 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1213 09:33:44.838140 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1213 09:33:44.838370 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1213 09:33:44.838427 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1213 09:33:44.838516 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1213 09:33:44.838546 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1213 09:33:44.838598 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1213 09:33:44.840424 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1213 09:33:44.840886 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1213 09:33:44.842576 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1213 09:33:44.842610 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
I1213 09:33:46.493666 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kube-scheduler [cf9e0b0dcbf9] <==
E1213 09:31:45.673059 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1213 09:31:45.672922 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1213 09:31:45.674367 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1213 09:31:45.674656 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1213 09:31:45.675121 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 09:31:45.675359 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 09:31:46.593861 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1213 09:31:46.620372 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1213 09:31:46.637400 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1213 09:31:46.671885 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1213 09:31:46.675254 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1213 09:31:46.679901 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1213 09:31:46.741636 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 09:31:46.785658 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1213 09:31:46.807123 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1213 09:31:46.817797 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1213 09:31:46.982287 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1213 09:31:47.004930 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1213 09:31:47.032549 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
I1213 09:31:50.148815 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1213 09:32:49.158251 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1213 09:32:49.158342 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1213 09:32:49.158389 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1213 09:32:49.158466 1 server.go:265] "[graceful-termination] secure server is exiting"
E1213 09:32:49.158499 1 run.go:72] "command failed" err="finished without leader elect"
==> kubelet <==
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.476932 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-ca-certs\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.476983 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-flexvolume-dir\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477010 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2791db0168e112ad1ebb49d47ad7acc4-kubeconfig\") pod \"kube-scheduler-embed-certs-594077\" (UID: \"2791db0168e112ad1ebb49d47ad7acc4\") " pod="kube-system/kube-scheduler-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477028 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/37016cb987c6e71339040b3541624dd3-etcd-certs\") pod \"etcd-embed-certs-594077\" (UID: \"37016cb987c6e71339040b3541624dd3\") " pod="kube-system/etcd-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477043 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-k8s-certs\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477057 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-usr-share-ca-certificates\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477075 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-k8s-certs\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477090 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-kubeconfig\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477105 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e72f6729e76bef19c11e9f77a5bbfed1-usr-share-ca-certificates\") pod \"kube-controller-manager-embed-certs-594077\" (UID: \"e72f6729e76bef19c11e9f77a5bbfed1\") " pod="kube-system/kube-controller-manager-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477118 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/37016cb987c6e71339040b3541624dd3-etcd-data\") pod \"etcd-embed-certs-594077\" (UID: \"37016cb987c6e71339040b3541624dd3\") " pod="kube-system/etcd-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.477149 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fab6b178c44aca2623e51c535287727e-ca-certs\") pod \"kube-apiserver-embed-certs-594077\" (UID: \"fab6b178c44aca2623e51c535287727e\") " pod="kube-system/kube-apiserver-embed-certs-594077"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.552903 4246 apiserver.go:52] "Watching apiserver"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.598581 4246 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.678949 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c24123e2-281a-4e07-83eb-bf2a70ed9689-lib-modules\") pod \"kube-proxy-gbh4v\" (UID: \"c24123e2-281a-4e07-83eb-bf2a70ed9689\") " pod="kube-system/kube-proxy-gbh4v"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.679096 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c24123e2-281a-4e07-83eb-bf2a70ed9689-xtables-lock\") pod \"kube-proxy-gbh4v\" (UID: \"c24123e2-281a-4e07-83eb-bf2a70ed9689\") " pod="kube-system/kube-proxy-gbh4v"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.679116 4246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1db9cb1e-bc7a-4d9f-9042-936fcad750f7-tmp\") pod \"storage-provisioner\" (UID: \"1db9cb1e-bc7a-4d9f-9042-936fcad750f7\") " pod="kube-system/storage-provisioner"
Dec 13 09:34:51 embed-certs-594077 kubelet[4246]: I1213 09:34:51.864587 4246 scope.go:117] "RemoveContainer" containerID="de05857e10ed14d338591b8d140c8fdbffcc13e5cdf3dc4d04b3f6eabfd47af5"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.113944 4246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114004 4246 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114210 4246 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-42zcv_kubernetes-dashboard(7123ef17-ca61-4aa4-a10e-b29ec51a6667): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.114247 4246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-42zcv" podUID="7123ef17-ca61-4aa4-a10e-b29ec51a6667"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.153914 4246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.154760 4246 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.154958 4246 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-r9qzb_kube-system(25f7da03-5692-48c6-8b6e-22b84e1aec43): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
Dec 13 09:34:52 embed-certs-594077 kubelet[4246]: E1213 09:34:52.155038 4246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-r9qzb" podUID="25f7da03-5692-48c6-8b6e-22b84e1aec43"
==> kubernetes-dashboard [9d744ae1656d] <==
2025/12/13 09:34:04 Starting overwatch
2025/12/13 09:34:04 Using namespace: kubernetes-dashboard
2025/12/13 09:34:04 Using in-cluster config to connect to apiserver
2025/12/13 09:34:04 Using secret token for csrf signing
2025/12/13 09:34:04 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/12/13 09:34:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/12/13 09:34:04 Successful initial request to the apiserver, version: v1.34.2
2025/12/13 09:34:04 Generating JWE encryption key
2025/12/13 09:34:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/12/13 09:34:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/12/13 09:34:04 Initializing JWE encryption key from synchronized object
2025/12/13 09:34:04 Creating in-cluster Sidecar client
2025/12/13 09:34:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/12/13 09:34:04 Serving insecurely on HTTP port: 9090
2025/12/13 09:34:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [2f643a44c094] <==
I1213 09:34:52.258732 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1213 09:34:52.299738 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1213 09:34:52.300284 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W1213 09:34:52.305497 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [de05857e10ed] <==
I1213 09:33:46.911515 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1213 09:34:16.919520 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-594077 -n embed-certs-594077
helpers_test.go:270: (dbg) Run: kubectl --context embed-certs-594077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv: exit status 1 (66.070866ms)
** stderr **
Error from server (NotFound): pods "metrics-server-746fcd58dc-r9qzb" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-42zcv" not found
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-594077 describe pod metrics-server-746fcd58dc-r9qzb dashboard-metrics-scraper-6ffb444bf9-42zcv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (39.30s)