=== RUN TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 pause -p no-preload-600035 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-600035 --alsologtostderr -v=1: (2.475505207s)
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-600035 -n no-preload-600035
E0311 21:16:23.302088 18140 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-10888/.minikube/profiles/gvisor-787339/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-600035 -n no-preload-600035: exit status 2 (15.892177149s)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-600035 -n no-preload-600035
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-600035 -n no-preload-600035: exit status 2 (15.885385074s)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 unpause -p no-preload-600035 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-600035 -n no-preload-600035
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-600035 -n no-preload-600035
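The failure above reduces to a single check: after `minikube pause`, `status --format={{.APIServer}}` should report "Paused", but this run kept reporting "Stopped" (exit status 2) for roughly 16 seconds per query. Below is a minimal, hypothetical reproduction sketch in Go — not the actual logic in start_stop_delete_test.go — that assumes a minikube binary on PATH and an existing no-preload-600035 profile, and simply polls the same status template the test queries.

// repro.go: pause a profile, then poll the apiserver status until "Paused" or timeout.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "no-preload-600035" // assumed profile/node name, taken from the log above

	// Pause the cluster, mirroring the command the test ran.
	if out, err := exec.Command("minikube", "pause", "-p", profile).CombinedOutput(); err != nil {
		fmt.Printf("pause failed: %v\n%s\n", err, out)
		return
	}

	// Poll the apiserver status field; the failing run saw "Stopped" where the test wants "Paused".
	// minikube status exits non-zero when components are not Running, so the error is ignored here.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("minikube", "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
		status := strings.TrimSpace(string(out))
		fmt.Printf("apiserver status: %q\n", status)
		if status == "Paused" {
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver status to become Paused")
}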
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-600035 -n no-preload-600035
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p no-preload-600035 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-600035 logs -n 25: (1.325273107s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| addons | enable metrics-server -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | --alsologtostderr -v=3 | | | | | |
| image | default-k8s-diff-port-469030 | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | image list --format=json | | | | | |
| pause | -p | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | default-k8s-diff-port-469030 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | default-k8s-diff-port-469030 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | default-k8s-diff-port-469030 | | | | | |
| delete | -p | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | default-k8s-diff-port-469030 | | | | | |
| start | -p auto-426800 --memory=3072 | auto-426800 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --driver=kvm2 | | | | | |
| addons | enable dashboard -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-634063 --memory=2200 --alsologtostderr | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:16 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --kubernetes-version=v1.29.0-rc.2 | | | | | |
| image | old-k8s-version-842886 image | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | list --format=json | | | | | |
| pause | -p old-k8s-version-842886 | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p old-k8s-version-842886 | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p old-k8s-version-842886 | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| delete | -p old-k8s-version-842886 | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| start | -p kindnet-426800 | kindnet-426800 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | |
| | --memory=3072 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --cni=kindnet --driver=kvm2 | | | | | |
| image | no-preload-600035 image list | no-preload-600035 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-600035 | no-preload-600035 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| image | newest-cni-634063 image list | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| unpause | -p no-preload-600035 | no-preload-600035 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| start | -p calico-426800 --memory=3072 | calico-426800 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --cni=calico --driver=kvm2 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/03/11 21:16:52
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.22.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0311 21:16:52.967285 60164 out.go:291] Setting OutFile to fd 1 ...
I0311 21:16:52.967447 60164 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 21:16:52.967460 60164 out.go:304] Setting ErrFile to fd 2...
I0311 21:16:52.967466 60164 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 21:16:52.967671 60164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-10888/.minikube/bin
I0311 21:16:52.968262 60164 out.go:298] Setting JSON to false
I0311 21:16:52.969232 60164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7111,"bootTime":1710184702,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0311 21:16:52.969297 60164 start.go:139] virtualization: kvm guest
I0311 21:16:52.971420 60164 out.go:177] * [calico-426800] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0311 21:16:52.972672 60164 notify.go:220] Checking for updates...
I0311 21:16:52.974166 60164 out.go:177] - MINIKUBE_LOCATION=18358
I0311 21:16:52.975631 60164 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0311 21:16:52.977062 60164 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18358-10888/kubeconfig
I0311 21:16:52.978437 60164 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-10888/.minikube
I0311 21:16:52.979763 60164 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0311 21:16:52.981027 60164 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0311 21:16:52.982649 60164 config.go:182] Loaded profile config "auto-426800": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 21:16:52.982766 60164 config.go:182] Loaded profile config "kindnet-426800": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 21:16:52.982873 60164 config.go:182] Loaded profile config "no-preload-600035": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
I0311 21:16:52.983009 60164 driver.go:392] Setting default libvirt URI to qemu:///system
I0311 21:16:53.024306 60164 out.go:177] * Using the kvm2 driver based on user configuration
I0311 21:16:53.025751 60164 start.go:297] selected driver: kvm2
I0311 21:16:53.025779 60164 start.go:901] validating driver "kvm2" against <nil>
I0311 21:16:53.025796 60164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0311 21:16:53.026947 60164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0311 21:16:53.027036 60164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-10888/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0311 21:16:53.043756 60164 install.go:137] /home/jenkins/minikube-integration/18358-10888/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
I0311 21:16:53.043817 60164 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0311 21:16:53.044126 60164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0311 21:16:53.044205 60164 cni.go:84] Creating CNI manager for "calico"
I0311 21:16:53.044219 60164 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
I0311 21:16:53.044318 60164 start.go:340] cluster config:
{Name:calico-426800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-426800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0311 21:16:53.044457 60164 iso.go:125] acquiring lock: {Name:mk2e75d88efec20ef8758b0fc6ce4592a5af6b76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0311 21:16:53.046115 60164 out.go:177] * Starting "calico-426800" primary control-plane node in "calico-426800" cluster
==> Docker <==
Mar 11 21:16:05 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:05.787468423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 21:16:05 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:05.787877342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 21:16:05 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:05.787997907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 21:16:05 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:05.788315122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 21:16:08 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:08.747507181Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:08 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:08.747623612Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:08 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:08.750480476Z" level=error msg="Handler for POST /v1.42/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:16 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:16.893961159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 21:16:16 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:16.894061489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 21:16:16 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:16.894075914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 21:16:16 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:16.894751630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 21:16:17 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:17.749324824Z" level=info msg="ignoring event" container=58a56fb2208c019c67c39fbdac8cd4b5be36b12bf1f6e9f5ac65f0df0e280ba9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 11 21:16:17 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:17.750590257Z" level=info msg="shim disconnected" id=58a56fb2208c019c67c39fbdac8cd4b5be36b12bf1f6e9f5ac65f0df0e280ba9 namespace=moby
Mar 11 21:16:17 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:17.750663279Z" level=warning msg="cleaning up after shim disconnected" id=58a56fb2208c019c67c39fbdac8cd4b5be36b12bf1f6e9f5ac65f0df0e280ba9 namespace=moby
Mar 11 21:16:17 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:17.750674923Z" level=info msg="cleaning up dead shim" namespace=moby
Mar 11 21:16:19 no-preload-600035 cri-dockerd[1039]: W0311 21:16:19.017861 1039 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Mar 11 21:16:19 no-preload-600035 cri-dockerd[1039]: W0311 21:16:19.019734 1039 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Mar 11 21:16:52 no-preload-600035 cri-dockerd[1039]: time="2024-03-11T21:16:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Mar 11 21:16:53 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:53.815479673Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:53 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:53.815879779Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:53 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:53.820629681Z" level=error msg="Handler for POST /v1.42/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:53 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:53.933182003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 21:16:53 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:53.936860744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 21:16:53 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:53.937054029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 21:16:53 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:53.937549164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
d13f8bb15854e a90209bb39e3d 1 second ago Exited dashboard-metrics-scraper 3 6d7521bdbb591 dashboard-metrics-scraper-5f989dc9cf-c8lwg
58a56fb2208c0 a90209bb39e3d 38 seconds ago Exited dashboard-metrics-scraper 2 6d7521bdbb591 dashboard-metrics-scraper-5f989dc9cf-c8lwg
e55f5280f745c kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 49 seconds ago Running kubernetes-dashboard 0 d4b7de90a96b0 kubernetes-dashboard-8694d4445c-8rxfv
8cadb500d829f 6e38f40d628db About a minute ago Running storage-provisioner 0 7d773a2cd2baf storage-provisioner
50c41d4e3dc06 cbb01a7bd410d About a minute ago Running coredns 0 2819f50862547 coredns-76f75df574-2586f
746a4d04fe214 cbb01a7bd410d About a minute ago Running coredns 0 a39c608aae826 coredns-76f75df574-kwjz5
70179bc06fdbd cc0a4f00aad7b About a minute ago Running kube-proxy 0 6523019f64041 kube-proxy-299x5
d30446d09e0d7 bbb47a0f83324 About a minute ago Running kube-apiserver 0 1a316434c3a75 kube-apiserver-no-preload-600035
8040c7965a6ea 4270645ed6b7a About a minute ago Running kube-scheduler 0 68b2379f7a1f3 kube-scheduler-no-preload-600035
4a5ea4edaa556 d4e01cdf63970 About a minute ago Running kube-controller-manager 0 d43edecf2b8f3 kube-controller-manager-no-preload-600035
c8d626fb37500 a0eed15eed449 About a minute ago Running etcd 0 5e45c573eb65b etcd-no-preload-600035
==> coredns [50c41d4e3dc0] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
==> coredns [746a4d04fe21] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
==> describe nodes <==
Name: no-preload-600035
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=no-preload-600035
kubernetes.io/os=linux
minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
minikube.k8s.io/name=no-preload-600035
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_03_11T21_15_35_0700
minikube.k8s.io/version=v1.32.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 11 Mar 2024 21:15:32 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: no-preload-600035
AcquireTime: <unset>
RenewTime: Mon, 11 Mar 2024 21:16:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 11 Mar 2024 21:16:52 +0000 Mon, 11 Mar 2024 21:15:29 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 11 Mar 2024 21:16:52 +0000 Mon, 11 Mar 2024 21:15:29 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 11 Mar 2024 21:16:52 +0000 Mon, 11 Mar 2024 21:15:29 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Mon, 11 Mar 2024 21:16:52 +0000 Mon, 11 Mar 2024 21:16:52 +0000 KubeletNotReady container runtime status check may not have completed yet
Addresses:
InternalIP: 192.168.50.227
Hostname: no-preload-600035
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164188Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164188Ki
pods: 110
System Info:
Machine ID: 31556c4db3124e468ee2dd7f60420dc4
System UUID: 31556c4d-b312-4e46-8ee2-dd7f60420dc4
Boot ID: 88617f2f-7fb6-4ef9-a1f5-db835e7ed357
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.7
Kubelet Version: v1.29.0-rc.2
Kube-Proxy Version: v1.29.0-rc.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system                 coredns-76f75df574-2586f                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
kube-system                 coredns-76f75df574-kwjz5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
kube-system                 etcd-no-preload-600035                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         79s
kube-system                 kube-apiserver-no-preload-600035              250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
kube-system                 kube-controller-manager-no-preload-600035     200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
kube-system                 kube-proxy-299x5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
kube-system                 kube-scheduler-no-preload-600035              100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
kube-system                 metrics-server-57f55c9bc5-mf4kp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         64s
kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-c8lwg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8rxfv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                950m (47%)   0 (0%)
memory             440Mi (20%)  340Mi (16%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 64s kube-proxy
Normal NodeHasSufficientMemory 86s (x8 over 86s) kubelet Node no-preload-600035 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 86s (x8 over 86s) kubelet Node no-preload-600035 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 86s (x7 over 86s) kubelet Node no-preload-600035 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 86s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 79s kubelet Node no-preload-600035 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 79s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 79s kubelet Node no-preload-600035 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 79s kubelet Node no-preload-600035 status is now: NodeHasSufficientPID
Normal Starting 79s kubelet Starting kubelet.
Normal RegisteredNode 67s node-controller Node no-preload-600035 event: Registered Node no-preload-600035 in Controller
Normal Starting 2s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2s kubelet Node no-preload-600035 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2s kubelet Node no-preload-600035 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2s kubelet Node no-preload-600035 status is now: NodeHasSufficientPID
Normal NodeNotReady 2s kubelet Node no-preload-600035 status is now: NodeNotReady
Normal NodeAllocatableEnforced 2s kubelet Updated Node Allocatable limit across pods
==> dmesg <==
[ +0.154138] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
[ +0.215485] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
[ +1.685468] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
[ +0.142156] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
[ +0.127434] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
[ +0.164091] systemd-fstab-generator[1031]: Ignoring "noauto" option for root device
[ +0.539929] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
[ +0.068486] kauditd_printk_skb: 348 callbacks suppressed
[ +2.363170] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
[Mar11 21:10] kauditd_printk_skb: 86 callbacks suppressed
[ +21.245096] kauditd_printk_skb: 2 callbacks suppressed
[Mar11 21:11] kauditd_printk_skb: 78 callbacks suppressed
[Mar11 21:15] systemd-fstab-generator[9703]: Ignoring "noauto" option for root device
[ +0.068834] kauditd_printk_skb: 16 callbacks suppressed
[ +7.751246] systemd-fstab-generator[10336]: Ignoring "noauto" option for root device
[ +0.095427] kauditd_printk_skb: 52 callbacks suppressed
[ +12.924256] systemd-fstab-generator[10684]: Ignoring "noauto" option for root device
[ +0.120247] kauditd_printk_skb: 12 callbacks suppressed
[ +5.053402] kauditd_printk_skb: 92 callbacks suppressed
[ +5.620750] kauditd_printk_skb: 2 callbacks suppressed
[Mar11 21:16] kauditd_printk_skb: 4 callbacks suppressed
[ +11.890435] systemd-fstab-generator[12132]: Ignoring "noauto" option for root device
[ +1.739292] systemd-fstab-generator[12309]: Ignoring "noauto" option for root device
[ +32.635482] systemd-fstab-generator[12549]: Ignoring "noauto" option for root device
[ +0.139933] kauditd_printk_skb: 40 callbacks suppressed
==> etcd [c8d626fb3750] <==
{"level":"info","ts":"2024-03-11T21:15:29.295476Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"38d24cc717544b1f","initial-advertise-peer-urls":["https://192.168.50.227:2380"],"listen-peer-urls":["https://192.168.50.227:2380"],"advertise-client-urls":["https://192.168.50.227:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.227:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-03-11T21:15:29.295554Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-03-11T21:15:29.295704Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.227:2380"}
{"level":"info","ts":"2024-03-11T21:15:29.295745Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.227:2380"}
{"level":"info","ts":"2024-03-11T21:15:29.865243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f is starting a new election at term 1"}
{"level":"info","ts":"2024-03-11T21:15:29.865311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f became pre-candidate at term 1"}
{"level":"info","ts":"2024-03-11T21:15:29.865334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f received MsgPreVoteResp from 38d24cc717544b1f at term 1"}
{"level":"info","ts":"2024-03-11T21:15:29.865347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f became candidate at term 2"}
{"level":"info","ts":"2024-03-11T21:15:29.865352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f received MsgVoteResp from 38d24cc717544b1f at term 2"}
{"level":"info","ts":"2024-03-11T21:15:29.865359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f became leader at term 2"}
{"level":"info","ts":"2024-03-11T21:15:29.865538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38d24cc717544b1f elected leader 38d24cc717544b1f at term 2"}
{"level":"info","ts":"2024-03-11T21:15:29.868166Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"38d24cc717544b1f","local-member-attributes":"{Name:no-preload-600035 ClientURLs:[https://192.168.50.227:2379]}","request-path":"/0/members/38d24cc717544b1f/attributes","cluster-id":"383e33379716a5f9","publish-timeout":"7s"}
{"level":"info","ts":"2024-03-11T21:15:29.868393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-03-11T21:15:29.868858Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-11T21:15:29.869994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-03-11T21:15:29.870463Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-03-11T21:15:29.870498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-03-11T21:15:29.872452Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-03-11T21:15:29.87432Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.227:2379"}
{"level":"info","ts":"2024-03-11T21:15:29.874746Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"383e33379716a5f9","local-member-id":"38d24cc717544b1f","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-11T21:15:29.881212Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-11T21:15:29.881297Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-11T21:15:55.869572Z","caller":"traceutil/trace.go:171","msg":"trace[57820418] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"109.030454ms","start":"2024-03-11T21:15:55.760519Z","end":"2024-03-11T21:15:55.86955Z","steps":["trace[57820418] 'process raft request' (duration: 108.879964ms)"],"step_count":1}
{"level":"info","ts":"2024-03-11T21:15:58.263059Z","caller":"traceutil/trace.go:171","msg":"trace[612508177] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"152.391135ms","start":"2024-03-11T21:15:58.110649Z","end":"2024-03-11T21:15:58.26304Z","steps":["trace[612508177] 'process raft request' (duration: 152.254488ms)"],"step_count":1}
{"level":"info","ts":"2024-03-11T21:16:02.919891Z","caller":"traceutil/trace.go:171","msg":"trace[1846342680] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"104.144457ms","start":"2024-03-11T21:16:02.815723Z","end":"2024-03-11T21:16:02.919867Z","steps":["trace[1846342680] 'process raft request' (duration: 104.000497ms)"],"step_count":1}
==> kernel <==
21:16:54 up 7 min, 0 users, load average: 1.40, 1.03, 0.49
Linux no-preload-600035 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [d30446d09e0d] <==
E0311 21:15:49.946202 1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
I0311 21:15:50.403590 1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.101.213.128"}
W0311 21:15:50.420932 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:15:50.420966 1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
W0311 21:15:50.434682 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:15:50.434727 1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
E0311 21:15:50.440997 1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
W0311 21:15:50.934895 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:15:50.934975 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0311 21:15:50.934988 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0311 21:15:50.935452 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:15:50.935995 1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0311 21:15:50.936034 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0311 21:15:51.450773 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.41.83"}
I0311 21:15:51.536763 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.12.40"}
W0311 21:16:51.985300 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:16:51.986236 1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0311 21:16:51.986253 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0311 21:16:52.003219 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:16:52.003259 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0311 21:16:52.003268 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-controller-manager [4a5ea4edaa55] <==
I0311 21:15:51.238843 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="36.793623ms"
I0311 21:15:51.238961 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.619µs"
I0311 21:15:51.239997 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.003µs"
I0311 21:15:51.284359 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.09µs"
I0311 21:15:51.291760 1 event.go:376] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8rxfv"
I0311 21:15:51.344302 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.971001ms"
I0311 21:15:51.436996 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.95466ms"
I0311 21:15:51.437151 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="97.446µs"
I0311 21:15:51.983826 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="56.904µs"
I0311 21:15:52.047581 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="17.473935ms"
I0311 21:15:52.048173 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="182.04µs"
I0311 21:15:53.076054 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="75.46µs"
I0311 21:15:53.133242 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="75.778µs"
I0311 21:15:53.215677 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="44.894332ms"
I0311 21:15:53.216440 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="330.462µs"
I0311 21:15:54.292512 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="67.972µs"
I0311 21:15:59.350245 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.381µs"
I0311 21:16:00.421221 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="127.573µs"
I0311 21:16:01.430817 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.932µs"
I0311 21:16:06.563139 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="34.163323ms"
I0311 21:16:06.564644 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="270.502µs"
E0311 21:16:52.025765 1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0311 21:16:52.090944 1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
I0311 21:16:53.645964 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="117.83µs"
I0311 21:16:53.670197 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="196.895µs"
==> kube-proxy [70179bc06fdb] <==
I0311 21:15:49.712644 1 server_others.go:72] "Using iptables proxy"
I0311 21:15:49.728984 1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.227"]
I0311 21:15:49.833461 1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
I0311 21:15:49.833516 1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0311 21:15:49.833537 1 server_others.go:168] "Using iptables Proxier"
I0311 21:15:49.839044 1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0311 21:15:49.839538 1 server.go:865] "Version info" version="v1.29.0-rc.2"
I0311 21:15:49.839576 1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0311 21:15:49.844839 1 config.go:188] "Starting service config controller"
I0311 21:15:49.844925 1 shared_informer.go:311] Waiting for caches to sync for service config
I0311 21:15:49.844970 1 config.go:97] "Starting endpoint slice config controller"
I0311 21:15:49.844999 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0311 21:15:49.845734 1 config.go:315] "Starting node config controller"
I0311 21:15:49.845780 1 shared_informer.go:311] Waiting for caches to sync for node config
I0311 21:15:49.945350 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0311 21:15:49.946673 1 shared_informer.go:318] Caches are synced for service config
I0311 21:15:49.947745 1 shared_informer.go:318] Caches are synced for node config
==> kube-scheduler [8040c7965a6e] <==
W0311 21:15:32.133506 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0311 21:15:32.140353 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0311 21:15:32.133610 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0311 21:15:32.140487 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0311 21:15:32.133730 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0311 21:15:32.140610 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0311 21:15:32.143769 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0311 21:15:32.144065 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0311 21:15:32.987158 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0311 21:15:32.988723 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0311 21:15:33.020565 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0311 21:15:33.020640 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0311 21:15:33.136497 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0311 21:15:33.136639 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0311 21:15:33.142391 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0311 21:15:33.142442 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0311 21:15:33.202709 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0311 21:15:33.202931 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0311 21:15:33.314946 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0311 21:15:33.314978 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0311 21:15:33.341811 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0311 21:15:33.342035 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0311 21:15:33.431946 1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0311 21:15:33.432294 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0311 21:15:35.275026 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Mar 11 21:16:52 no-preload-600035 kubelet[12556]: E0311 21:16:52.893945 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-600035\" already exists" pod="kube-system/etcd-no-preload-600035"
Mar 11 21:16:52 no-preload-600035 kubelet[12556]: E0311 21:16:52.898832 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-600035\" already exists" pod="kube-system/kube-scheduler-no-preload-600035"
Mar 11 21:16:52 no-preload-600035 kubelet[12556]: E0311 21:16:52.927425 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-600035\" already exists" pod="kube-system/kube-controller-manager-no-preload-600035"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: E0311 21:16:53.047816 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-600035\" already exists" pod="kube-system/kube-apiserver-no-preload-600035"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.485213 12556 apiserver.go:52] "Watching apiserver"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.489615 12556 topology_manager.go:215] "Topology Admit Handler" podUID="e82c24e5-e2e3-4dea-b811-a65e12fa7cc6" podNamespace="kube-system" podName="coredns-76f75df574-2586f"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.489750 12556 topology_manager.go:215] "Topology Admit Handler" podUID="ead0e72d-a501-41d6-86ea-47e8348ce7c6" podNamespace="kube-system" podName="coredns-76f75df574-kwjz5"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.489880 12556 topology_manager.go:215] "Topology Admit Handler" podUID="f06459b7-a777-425b-a706-b1fad95b01cb" podNamespace="kube-system" podName="kube-proxy-299x5"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.489969 12556 topology_manager.go:215] "Topology Admit Handler" podUID="6b82d8e0-8f1f-47e8-986d-3e805bb426c5" podNamespace="kube-system" podName="storage-provisioner"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.490034 12556 topology_manager.go:215] "Topology Admit Handler" podUID="150b4dfa-9ef0-4fed-8ed3-cbc1b226d9d9" podNamespace="kube-system" podName="metrics-server-57f55c9bc5-mf4kp"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.490195 12556 topology_manager.go:215] "Topology Admit Handler" podUID="6a7050f1-f5eb-40e1-abc3-e3f05636b55c" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-c8lwg"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.490305 12556 topology_manager.go:215] "Topology Admit Handler" podUID="893589b3-9310-487d-9d0c-cc255c8f7e3a" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-8rxfv"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.541751 12556 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.571500 12556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b82d8e0-8f1f-47e8-986d-3e805bb426c5-tmp\") pod \"storage-provisioner\" (UID: \"6b82d8e0-8f1f-47e8-986d-3e805bb426c5\") " pod="kube-system/storage-provisioner"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.572084 12556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f06459b7-a777-425b-a706-b1fad95b01cb-lib-modules\") pod \"kube-proxy-299x5\" (UID: \"f06459b7-a777-425b-a706-b1fad95b01cb\") " pod="kube-system/kube-proxy-299x5"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.572579 12556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f06459b7-a777-425b-a706-b1fad95b01cb-xtables-lock\") pod \"kube-proxy-299x5\" (UID: \"f06459b7-a777-425b-a706-b1fad95b01cb\") " pod="kube-system/kube-proxy-299x5"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.792730 12556 scope.go:117] "RemoveContainer" containerID="58a56fb2208c019c67c39fbdac8cd4b5be36b12bf1f6e9f5ac65f0df0e280ba9"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: E0311 21:16:53.821273 12556 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: E0311 21:16:53.821329 12556 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: E0311 21:16:53.821495 12556 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v98xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-mf4kp_kube-system(150b4dfa-9ef0-4fed-8ed3-cbc1b226d9d9): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: E0311 21:16:53.821547 12556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-mf4kp" podUID="150b4dfa-9ef0-4fed-8ed3-cbc1b226d9d9"
Mar 11 21:16:54 no-preload-600035 kubelet[12556]: E0311 21:16:54.224597 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-600035\" already exists" pod="kube-system/kube-scheduler-no-preload-600035"
Mar 11 21:16:54 no-preload-600035 kubelet[12556]: E0311 21:16:54.224806 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-600035\" already exists" pod="kube-system/kube-controller-manager-no-preload-600035"
Mar 11 21:16:54 no-preload-600035 kubelet[12556]: E0311 21:16:54.228961 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-600035\" already exists" pod="kube-system/kube-apiserver-no-preload-600035"
Mar 11 21:16:54 no-preload-600035 kubelet[12556]: E0311 21:16:54.229323 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-600035\" already exists" pod="kube-system/etcd-no-preload-600035"
==> kubernetes-dashboard [e55f5280f745] <==
2024/03/11 21:16:05 Using namespace: kubernetes-dashboard
2024/03/11 21:16:05 Using in-cluster config to connect to apiserver
2024/03/11 21:16:05 Using secret token for csrf signing
2024/03/11 21:16:05 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/03/11 21:16:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/03/11 21:16:05 Successful initial request to the apiserver, version: v1.29.0-rc.2
2024/03/11 21:16:05 Generating JWE encryption key
2024/03/11 21:16:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/03/11 21:16:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/03/11 21:16:06 Initializing JWE encryption key from synchronized object
2024/03/11 21:16:06 Creating in-cluster Sidecar client
2024/03/11 21:16:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/11 21:16:06 Serving insecurely on HTTP port: 9090
2024/03/11 21:16:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/11 21:16:05 Starting overwatch
==> storage-provisioner [8cadb500d829] <==
I0311 21:15:51.976679 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0311 21:15:52.009633 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0311 21:15:52.009905 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0311 21:15:52.051498 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0311 21:15:52.052081 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-600035_50acbd54-de71-4ecb-a307-57232f43d25f!
I0311 21:15:52.058172 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6757b689-1777-450b-89da-297e829759bd", APIVersion:"v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-600035_50acbd54-de71-4ecb-a307-57232f43d25f became leader
I0311 21:15:52.154363 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-600035_50acbd54-de71-4ecb-a307-57232f43d25f!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-600035 -n no-preload-600035
helpers_test.go:261: (dbg) Run: kubectl --context no-preload-600035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mf4kp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context no-preload-600035 describe pod metrics-server-57f55c9bc5-mf4kp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-600035 describe pod metrics-server-57f55c9bc5-mf4kp: exit status 1 (73.96588ms)
** stderr **
Error from server (NotFound): pods "metrics-server-57f55c9bc5-mf4kp" not found
** /stderr **
helpers_test.go:279: kubectl --context no-preload-600035 describe pod metrics-server-57f55c9bc5-mf4kp: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-600035 -n no-preload-600035
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p no-preload-600035 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-600035 logs -n 25: (1.121106731s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| stop | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | --alsologtostderr -v=3 | | | | | |
| image | default-k8s-diff-port-469030 | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | image list --format=json | | | | | |
| pause | -p | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | default-k8s-diff-port-469030 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | default-k8s-diff-port-469030 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | default-k8s-diff-port-469030 | | | | | |
| delete | -p | default-k8s-diff-port-469030 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | default-k8s-diff-port-469030 | | | | | |
| start | -p auto-426800 --memory=3072 | auto-426800 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:16 UTC |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --driver=kvm2 | | | | | |
| addons | enable dashboard -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-634063 --memory=2200 --alsologtostderr | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:16 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --kubernetes-version=v1.29.0-rc.2 | | | | | |
| image | old-k8s-version-842886 image | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | list --format=json | | | | | |
| pause | -p old-k8s-version-842886 | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p old-k8s-version-842886 | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p old-k8s-version-842886 | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| delete | -p old-k8s-version-842886 | old-k8s-version-842886 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
| start | -p kindnet-426800 | kindnet-426800 | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | |
| | --memory=3072 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --cni=kindnet --driver=kvm2 | | | | | |
| image | no-preload-600035 image list | no-preload-600035 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-600035 | no-preload-600035 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| image | newest-cni-634063 image list | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| unpause | -p no-preload-600035 | no-preload-600035 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-634063 | newest-cni-634063 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| start | -p calico-426800 --memory=3072 | calico-426800 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --cni=calico --driver=kvm2 | | | | | |
| ssh | -p auto-426800 pgrep -a | auto-426800 | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
| | kubelet | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/03/11 21:16:52
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.22.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0311 21:16:52.967285 60164 out.go:291] Setting OutFile to fd 1 ...
I0311 21:16:52.967447 60164 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 21:16:52.967460 60164 out.go:304] Setting ErrFile to fd 2...
I0311 21:16:52.967466 60164 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 21:16:52.967671 60164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-10888/.minikube/bin
I0311 21:16:52.968262 60164 out.go:298] Setting JSON to false
I0311 21:16:52.969232 60164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7111,"bootTime":1710184702,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0311 21:16:52.969297 60164 start.go:139] virtualization: kvm guest
I0311 21:16:52.971420 60164 out.go:177] * [calico-426800] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0311 21:16:52.972672 60164 notify.go:220] Checking for updates...
I0311 21:16:52.974166 60164 out.go:177] - MINIKUBE_LOCATION=18358
I0311 21:16:52.975631 60164 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0311 21:16:52.977062 60164 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18358-10888/kubeconfig
I0311 21:16:52.978437 60164 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-10888/.minikube
I0311 21:16:52.979763 60164 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0311 21:16:52.981027 60164 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0311 21:16:52.982649 60164 config.go:182] Loaded profile config "auto-426800": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 21:16:52.982766 60164 config.go:182] Loaded profile config "kindnet-426800": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 21:16:52.982873 60164 config.go:182] Loaded profile config "no-preload-600035": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
I0311 21:16:52.983009 60164 driver.go:392] Setting default libvirt URI to qemu:///system
I0311 21:16:53.024306 60164 out.go:177] * Using the kvm2 driver based on user configuration
I0311 21:16:53.025751 60164 start.go:297] selected driver: kvm2
I0311 21:16:53.025779 60164 start.go:901] validating driver "kvm2" against <nil>
I0311 21:16:53.025796 60164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0311 21:16:53.026947 60164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0311 21:16:53.027036 60164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-10888/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0311 21:16:53.043756 60164 install.go:137] /home/jenkins/minikube-integration/18358-10888/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
I0311 21:16:53.043817 60164 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0311 21:16:53.044126 60164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0311 21:16:53.044205 60164 cni.go:84] Creating CNI manager for "calico"
I0311 21:16:53.044219 60164 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
I0311 21:16:53.044318 60164 start.go:340] cluster config:
{Name:calico-426800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-426800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0311 21:16:53.044457 60164 iso.go:125] acquiring lock: {Name:mk2e75d88efec20ef8758b0fc6ce4592a5af6b76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0311 21:16:53.046115 60164 out.go:177] * Starting "calico-426800" primary control-plane node in "calico-426800" cluster
I0311 21:16:53.638098 58305 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.578356866s)
I0311 21:16:53.638135 58305 system_svc.go:56] duration metric: took 5.578466891s WaitForService to wait for kubelet
I0311 21:16:53.638146 58305 kubeadm.go:576] duration metric: took 10.131266103s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0311 21:16:53.638167 58305 node_conditions.go:102] verifying NodePressure condition ...
I0311 21:16:53.638098 58305 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/gvisor-addon_2 | docker load": (6.566169519s)
I0311 21:16:53.638228 58305 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-10888/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 from cache
I0311 21:16:53.638260 58305 cache_images.go:123] Successfully loaded all cached images
I0311 21:16:53.638270 58305 cache_images.go:92] duration metric: took 7.987645231s to LoadCachedImages
I0311 21:16:53.638283 58305 cache_images.go:262] succeeded pushing to: auto-426800
I0311 21:16:53.638293 58305 cache_images.go:263] failed pushing to:
I0311 21:16:53.638318 58305 main.go:141] libmachine: Making call to close driver server
I0311 21:16:53.638332 58305 main.go:141] libmachine: (auto-426800) Calling .Close
I0311 21:16:53.638614 58305 main.go:141] libmachine: Successfully made call to close driver server
I0311 21:16:53.638632 58305 main.go:141] libmachine: Making call to close connection to plugin binary
I0311 21:16:53.638647 58305 main.go:141] libmachine: (auto-426800) DBG | Closing plugin on server side
I0311 21:16:53.638687 58305 main.go:141] libmachine: Making call to close driver server
I0311 21:16:53.638700 58305 main.go:141] libmachine: (auto-426800) Calling .Close
I0311 21:16:53.639086 58305 main.go:141] libmachine: Successfully made call to close driver server
I0311 21:16:53.639094 58305 main.go:141] libmachine: (auto-426800) DBG | Closing plugin on server side
I0311 21:16:53.639103 58305 main.go:141] libmachine: Making call to close connection to plugin binary
I0311 21:16:53.642129 58305 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0311 21:16:53.642150 58305 node_conditions.go:123] node cpu capacity is 2
I0311 21:16:53.642159 58305 node_conditions.go:105] duration metric: took 3.983138ms to run NodePressure ...
I0311 21:16:53.642168 58305 start.go:240] waiting for startup goroutines ...
I0311 21:16:53.642175 58305 start.go:245] waiting for cluster config update ...
I0311 21:16:53.642184 58305 start.go:254] writing updated cluster config ...
I0311 21:16:53.642432 58305 ssh_runner.go:195] Run: rm -f paused
I0311 21:16:53.693406 58305 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
I0311 21:16:53.695191 58305 out.go:177] * Done! kubectl is now configured to use "auto-426800" cluster and "default" namespace by default
==> Docker <==
Mar 11 21:16:08 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:08.747507181Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:08 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:08.747623612Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:08 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:08.750480476Z" level=error msg="Handler for POST /v1.42/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:16 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:16.893961159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 21:16:16 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:16.894061489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 21:16:16 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:16.894075914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 21:16:16 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:16.894751630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 21:16:17 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:17.749324824Z" level=info msg="ignoring event" container=58a56fb2208c019c67c39fbdac8cd4b5be36b12bf1f6e9f5ac65f0df0e280ba9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 11 21:16:17 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:17.750590257Z" level=info msg="shim disconnected" id=58a56fb2208c019c67c39fbdac8cd4b5be36b12bf1f6e9f5ac65f0df0e280ba9 namespace=moby
Mar 11 21:16:17 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:17.750663279Z" level=warning msg="cleaning up after shim disconnected" id=58a56fb2208c019c67c39fbdac8cd4b5be36b12bf1f6e9f5ac65f0df0e280ba9 namespace=moby
Mar 11 21:16:17 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:17.750674923Z" level=info msg="cleaning up dead shim" namespace=moby
Mar 11 21:16:19 no-preload-600035 cri-dockerd[1039]: W0311 21:16:19.017861 1039 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Mar 11 21:16:19 no-preload-600035 cri-dockerd[1039]: W0311 21:16:19.019734 1039 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Mar 11 21:16:52 no-preload-600035 cri-dockerd[1039]: time="2024-03-11T21:16:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Mar 11 21:16:53 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:53.815479673Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:53 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:53.815879779Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:53 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:53.820629681Z" level=error msg="Handler for POST /v1.42/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Mar 11 21:16:53 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:53.933182003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 21:16:53 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:53.936860744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 21:16:53 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:53.937054029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 21:16:53 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:53.937549164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 21:16:54 no-preload-600035 dockerd[826]: time="2024-03-11T21:16:54.105971633Z" level=info msg="ignoring event" container=d13f8bb15854eff31a83b3eafa2cdacbeed5d8117241963b0d610c95467a6a0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 11 21:16:54 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:54.107322781Z" level=info msg="shim disconnected" id=d13f8bb15854eff31a83b3eafa2cdacbeed5d8117241963b0d610c95467a6a0a namespace=moby
Mar 11 21:16:54 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:54.108203323Z" level=warning msg="cleaning up after shim disconnected" id=d13f8bb15854eff31a83b3eafa2cdacbeed5d8117241963b0d610c95467a6a0a namespace=moby
Mar 11 21:16:54 no-preload-600035 dockerd[832]: time="2024-03-11T21:16:54.108475902Z" level=info msg="cleaning up dead shim" namespace=moby
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
d13f8bb15854e a90209bb39e3d 3 seconds ago Exited dashboard-metrics-scraper 3 6d7521bdbb591 dashboard-metrics-scraper-5f989dc9cf-c8lwg
e55f5280f745c kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 51 seconds ago Running kubernetes-dashboard 0 d4b7de90a96b0 kubernetes-dashboard-8694d4445c-8rxfv
8cadb500d829f 6e38f40d628db About a minute ago Running storage-provisioner 0 7d773a2cd2baf storage-provisioner
50c41d4e3dc06 cbb01a7bd410d About a minute ago Running coredns 0 2819f50862547 coredns-76f75df574-2586f
746a4d04fe214 cbb01a7bd410d About a minute ago Running coredns 0 a39c608aae826 coredns-76f75df574-kwjz5
70179bc06fdbd cc0a4f00aad7b About a minute ago Running kube-proxy 0 6523019f64041 kube-proxy-299x5
d30446d09e0d7 bbb47a0f83324 About a minute ago Running kube-apiserver 0 1a316434c3a75 kube-apiserver-no-preload-600035
8040c7965a6ea 4270645ed6b7a About a minute ago Running kube-scheduler 0 68b2379f7a1f3 kube-scheduler-no-preload-600035
4a5ea4edaa556 d4e01cdf63970 About a minute ago Running kube-controller-manager 0 d43edecf2b8f3 kube-controller-manager-no-preload-600035
c8d626fb37500 a0eed15eed449 About a minute ago Running etcd 0 5e45c573eb65b etcd-no-preload-600035
==> coredns [50c41d4e3dc0] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
==> coredns [746a4d04fe21] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
==> describe nodes <==
Name: no-preload-600035
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=no-preload-600035
kubernetes.io/os=linux
minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
minikube.k8s.io/name=no-preload-600035
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_03_11T21_15_35_0700
minikube.k8s.io/version=v1.32.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 11 Mar 2024 21:15:32 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: no-preload-600035
AcquireTime: <unset>
RenewTime: Mon, 11 Mar 2024 21:16:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 11 Mar 2024 21:16:52 +0000 Mon, 11 Mar 2024 21:15:29 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 11 Mar 2024 21:16:52 +0000 Mon, 11 Mar 2024 21:15:29 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 11 Mar 2024 21:16:52 +0000 Mon, 11 Mar 2024 21:15:29 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Mon, 11 Mar 2024 21:16:52 +0000 Mon, 11 Mar 2024 21:16:52 +0000 KubeletNotReady container runtime status check may not have completed yet
Addresses:
InternalIP: 192.168.50.227
Hostname: no-preload-600035
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164188Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164188Ki
pods: 110
System Info:
Machine ID: 31556c4db3124e468ee2dd7f60420dc4
System UUID: 31556c4d-b312-4e46-8ee2-dd7f60420dc4
Boot ID: 88617f2f-7fb6-4ef9-a1f5-db835e7ed357
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.7
Kubelet Version: v1.29.0-rc.2
Kube-Proxy Version: v1.29.0-rc.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-76f75df574-2586f 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 68s
kube-system coredns-76f75df574-kwjz5 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 68s
kube-system etcd-no-preload-600035 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 81s
kube-system kube-apiserver-no-preload-600035 250m (12%) 0 (0%) 0 (0%) 0 (0%) 81s
kube-system kube-controller-manager-no-preload-600035 200m (10%) 0 (0%) 0 (0%) 0 (0%) 81s
kube-system kube-proxy-299x5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 68s
kube-system kube-scheduler-no-preload-600035 100m (5%) 0 (0%) 0 (0%) 0 (0%) 81s
kube-system metrics-server-57f55c9bc5-mf4kp 100m (5%) 0 (0%) 200Mi (9%) 0 (0%) 66s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 67s
kubernetes-dashboard dashboard-metrics-scraper-5f989dc9cf-c8lwg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65s
kubernetes-dashboard kubernetes-dashboard-8694d4445c-8rxfv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 440Mi (20%) 340Mi (16%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 66s kube-proxy
Normal NodeHasSufficientMemory 88s (x8 over 88s) kubelet Node no-preload-600035 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 88s (x8 over 88s) kubelet Node no-preload-600035 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 88s (x7 over 88s) kubelet Node no-preload-600035 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 88s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 81s kubelet Node no-preload-600035 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 81s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 81s kubelet Node no-preload-600035 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 81s kubelet Node no-preload-600035 status is now: NodeHasSufficientPID
Normal Starting 81s kubelet Starting kubelet.
Normal RegisteredNode 69s node-controller Node no-preload-600035 event: Registered Node no-preload-600035 in Controller
Normal Starting 4s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4s kubelet Node no-preload-600035 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4s kubelet Node no-preload-600035 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4s kubelet Node no-preload-600035 status is now: NodeHasSufficientPID
Normal NodeNotReady 4s kubelet Node no-preload-600035 status is now: NodeNotReady
Normal NodeAllocatableEnforced 4s kubelet Updated Node Allocatable limit across pods
==> dmesg <==
[ +0.154138] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
[ +0.215485] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
[ +1.685468] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
[ +0.142156] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
[ +0.127434] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
[ +0.164091] systemd-fstab-generator[1031]: Ignoring "noauto" option for root device
[ +0.539929] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
[ +0.068486] kauditd_printk_skb: 348 callbacks suppressed
[ +2.363170] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
[Mar11 21:10] kauditd_printk_skb: 86 callbacks suppressed
[ +21.245096] kauditd_printk_skb: 2 callbacks suppressed
[Mar11 21:11] kauditd_printk_skb: 78 callbacks suppressed
[Mar11 21:15] systemd-fstab-generator[9703]: Ignoring "noauto" option for root device
[ +0.068834] kauditd_printk_skb: 16 callbacks suppressed
[ +7.751246] systemd-fstab-generator[10336]: Ignoring "noauto" option for root device
[ +0.095427] kauditd_printk_skb: 52 callbacks suppressed
[ +12.924256] systemd-fstab-generator[10684]: Ignoring "noauto" option for root device
[ +0.120247] kauditd_printk_skb: 12 callbacks suppressed
[ +5.053402] kauditd_printk_skb: 92 callbacks suppressed
[ +5.620750] kauditd_printk_skb: 2 callbacks suppressed
[Mar11 21:16] kauditd_printk_skb: 4 callbacks suppressed
[ +11.890435] systemd-fstab-generator[12132]: Ignoring "noauto" option for root device
[ +1.739292] systemd-fstab-generator[12309]: Ignoring "noauto" option for root device
[ +32.635482] systemd-fstab-generator[12549]: Ignoring "noauto" option for root device
[ +0.139933] kauditd_printk_skb: 40 callbacks suppressed
==> etcd [c8d626fb3750] <==
{"level":"info","ts":"2024-03-11T21:15:29.295476Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"38d24cc717544b1f","initial-advertise-peer-urls":["https://192.168.50.227:2380"],"listen-peer-urls":["https://192.168.50.227:2380"],"advertise-client-urls":["https://192.168.50.227:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.227:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-03-11T21:15:29.295554Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-03-11T21:15:29.295704Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.227:2380"}
{"level":"info","ts":"2024-03-11T21:15:29.295745Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.227:2380"}
{"level":"info","ts":"2024-03-11T21:15:29.865243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f is starting a new election at term 1"}
{"level":"info","ts":"2024-03-11T21:15:29.865311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f became pre-candidate at term 1"}
{"level":"info","ts":"2024-03-11T21:15:29.865334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f received MsgPreVoteResp from 38d24cc717544b1f at term 1"}
{"level":"info","ts":"2024-03-11T21:15:29.865347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f became candidate at term 2"}
{"level":"info","ts":"2024-03-11T21:15:29.865352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f received MsgVoteResp from 38d24cc717544b1f at term 2"}
{"level":"info","ts":"2024-03-11T21:15:29.865359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38d24cc717544b1f became leader at term 2"}
{"level":"info","ts":"2024-03-11T21:15:29.865538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38d24cc717544b1f elected leader 38d24cc717544b1f at term 2"}
{"level":"info","ts":"2024-03-11T21:15:29.868166Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"38d24cc717544b1f","local-member-attributes":"{Name:no-preload-600035 ClientURLs:[https://192.168.50.227:2379]}","request-path":"/0/members/38d24cc717544b1f/attributes","cluster-id":"383e33379716a5f9","publish-timeout":"7s"}
{"level":"info","ts":"2024-03-11T21:15:29.868393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-03-11T21:15:29.868858Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-11T21:15:29.869994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-03-11T21:15:29.870463Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-03-11T21:15:29.870498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-03-11T21:15:29.872452Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-03-11T21:15:29.87432Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.227:2379"}
{"level":"info","ts":"2024-03-11T21:15:29.874746Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"383e33379716a5f9","local-member-id":"38d24cc717544b1f","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-11T21:15:29.881212Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-11T21:15:29.881297Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-11T21:15:55.869572Z","caller":"traceutil/trace.go:171","msg":"trace[57820418] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"109.030454ms","start":"2024-03-11T21:15:55.760519Z","end":"2024-03-11T21:15:55.86955Z","steps":["trace[57820418] 'process raft request' (duration: 108.879964ms)"],"step_count":1}
{"level":"info","ts":"2024-03-11T21:15:58.263059Z","caller":"traceutil/trace.go:171","msg":"trace[612508177] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"152.391135ms","start":"2024-03-11T21:15:58.110649Z","end":"2024-03-11T21:15:58.26304Z","steps":["trace[612508177] 'process raft request' (duration: 152.254488ms)"],"step_count":1}
{"level":"info","ts":"2024-03-11T21:16:02.919891Z","caller":"traceutil/trace.go:171","msg":"trace[1846342680] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"104.144457ms","start":"2024-03-11T21:16:02.815723Z","end":"2024-03-11T21:16:02.919867Z","steps":["trace[1846342680] 'process raft request' (duration: 104.000497ms)"],"step_count":1}
==> kernel <==
21:16:56 up 7 min, 0 users, load average: 1.53, 1.07, 0.50
Linux no-preload-600035 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [d30446d09e0d] <==
E0311 21:15:49.946202 1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
I0311 21:15:50.403590 1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.101.213.128"}
W0311 21:15:50.420932 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:15:50.420966 1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
W0311 21:15:50.434682 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:15:50.434727 1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
E0311 21:15:50.440997 1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
W0311 21:15:50.934895 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:15:50.934975 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0311 21:15:50.934988 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0311 21:15:50.935452 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:15:50.935995 1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0311 21:15:50.936034 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0311 21:15:51.450773 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.41.83"}
I0311 21:15:51.536763 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.12.40"}
W0311 21:16:51.985300 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:16:51.986236 1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0311 21:16:51.986253 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0311 21:16:52.003219 1 handler_proxy.go:93] no RequestInfo found in the context
E0311 21:16:52.003259 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0311 21:16:52.003268 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-controller-manager [4a5ea4edaa55] <==
I0311 21:15:51.239997 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.003µs"
I0311 21:15:51.284359 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.09µs"
I0311 21:15:51.291760 1 event.go:376] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8rxfv"
I0311 21:15:51.344302 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.971001ms"
I0311 21:15:51.436996 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.95466ms"
I0311 21:15:51.437151 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="97.446µs"
I0311 21:15:51.983826 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="56.904µs"
I0311 21:15:52.047581 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="17.473935ms"
I0311 21:15:52.048173 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="182.04µs"
I0311 21:15:53.076054 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="75.46µs"
I0311 21:15:53.133242 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="75.778µs"
I0311 21:15:53.215677 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="44.894332ms"
I0311 21:15:53.216440 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="330.462µs"
I0311 21:15:54.292512 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="67.972µs"
I0311 21:15:59.350245 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.381µs"
I0311 21:16:00.421221 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="127.573µs"
I0311 21:16:01.430817 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.932µs"
I0311 21:16:06.563139 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="34.163323ms"
I0311 21:16:06.564644 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="270.502µs"
E0311 21:16:52.025765 1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
I0311 21:16:52.090944 1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
I0311 21:16:53.645964 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="117.83µs"
I0311 21:16:53.670197 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="196.895µs"
I0311 21:16:55.262893 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="104.82µs"
I0311 21:16:56.285999 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.919µs"
==> kube-proxy [70179bc06fdb] <==
I0311 21:15:49.712644 1 server_others.go:72] "Using iptables proxy"
I0311 21:15:49.728984 1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.227"]
I0311 21:15:49.833461 1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
I0311 21:15:49.833516 1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0311 21:15:49.833537 1 server_others.go:168] "Using iptables Proxier"
I0311 21:15:49.839044 1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0311 21:15:49.839538 1 server.go:865] "Version info" version="v1.29.0-rc.2"
I0311 21:15:49.839576 1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0311 21:15:49.844839 1 config.go:188] "Starting service config controller"
I0311 21:15:49.844925 1 shared_informer.go:311] Waiting for caches to sync for service config
I0311 21:15:49.844970 1 config.go:97] "Starting endpoint slice config controller"
I0311 21:15:49.844999 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0311 21:15:49.845734 1 config.go:315] "Starting node config controller"
I0311 21:15:49.845780 1 shared_informer.go:311] Waiting for caches to sync for node config
I0311 21:15:49.945350 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0311 21:15:49.946673 1 shared_informer.go:318] Caches are synced for service config
I0311 21:15:49.947745 1 shared_informer.go:318] Caches are synced for node config
==> kube-scheduler [8040c7965a6e] <==
W0311 21:15:32.133506 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0311 21:15:32.140353 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0311 21:15:32.133610 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0311 21:15:32.140487 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0311 21:15:32.133730 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0311 21:15:32.140610 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0311 21:15:32.143769 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0311 21:15:32.144065 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0311 21:15:32.987158 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0311 21:15:32.988723 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0311 21:15:33.020565 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0311 21:15:33.020640 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0311 21:15:33.136497 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0311 21:15:33.136639 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0311 21:15:33.142391 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0311 21:15:33.142442 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0311 21:15:33.202709 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0311 21:15:33.202931 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0311 21:15:33.314946 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0311 21:15:33.314978 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0311 21:15:33.341811 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0311 21:15:33.342035 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0311 21:15:33.431946 1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0311 21:15:33.432294 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0311 21:15:35.275026 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.489615 12556 topology_manager.go:215] "Topology Admit Handler" podUID="e82c24e5-e2e3-4dea-b811-a65e12fa7cc6" podNamespace="kube-system" podName="coredns-76f75df574-2586f"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.489750 12556 topology_manager.go:215] "Topology Admit Handler" podUID="ead0e72d-a501-41d6-86ea-47e8348ce7c6" podNamespace="kube-system" podName="coredns-76f75df574-kwjz5"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.489880 12556 topology_manager.go:215] "Topology Admit Handler" podUID="f06459b7-a777-425b-a706-b1fad95b01cb" podNamespace="kube-system" podName="kube-proxy-299x5"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.489969 12556 topology_manager.go:215] "Topology Admit Handler" podUID="6b82d8e0-8f1f-47e8-986d-3e805bb426c5" podNamespace="kube-system" podName="storage-provisioner"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.490034 12556 topology_manager.go:215] "Topology Admit Handler" podUID="150b4dfa-9ef0-4fed-8ed3-cbc1b226d9d9" podNamespace="kube-system" podName="metrics-server-57f55c9bc5-mf4kp"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.490195 12556 topology_manager.go:215] "Topology Admit Handler" podUID="6a7050f1-f5eb-40e1-abc3-e3f05636b55c" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-c8lwg"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.490305 12556 topology_manager.go:215] "Topology Admit Handler" podUID="893589b3-9310-487d-9d0c-cc255c8f7e3a" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-8rxfv"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.541751 12556 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.571500 12556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b82d8e0-8f1f-47e8-986d-3e805bb426c5-tmp\") pod \"storage-provisioner\" (UID: \"6b82d8e0-8f1f-47e8-986d-3e805bb426c5\") " pod="kube-system/storage-provisioner"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.572084 12556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f06459b7-a777-425b-a706-b1fad95b01cb-lib-modules\") pod \"kube-proxy-299x5\" (UID: \"f06459b7-a777-425b-a706-b1fad95b01cb\") " pod="kube-system/kube-proxy-299x5"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.572579 12556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f06459b7-a777-425b-a706-b1fad95b01cb-xtables-lock\") pod \"kube-proxy-299x5\" (UID: \"f06459b7-a777-425b-a706-b1fad95b01cb\") " pod="kube-system/kube-proxy-299x5"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: I0311 21:16:53.792730 12556 scope.go:117] "RemoveContainer" containerID="58a56fb2208c019c67c39fbdac8cd4b5be36b12bf1f6e9f5ac65f0df0e280ba9"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: E0311 21:16:53.821273 12556 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: E0311 21:16:53.821329 12556 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: E0311 21:16:53.821495 12556 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v98xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-mf4kp_kube-system(150b4dfa-9ef0-4fed-8ed3-cbc1b226d9d9): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
Mar 11 21:16:53 no-preload-600035 kubelet[12556]: E0311 21:16:53.821547 12556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-mf4kp" podUID="150b4dfa-9ef0-4fed-8ed3-cbc1b226d9d9"
Mar 11 21:16:54 no-preload-600035 kubelet[12556]: E0311 21:16:54.224597 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-600035\" already exists" pod="kube-system/kube-scheduler-no-preload-600035"
Mar 11 21:16:54 no-preload-600035 kubelet[12556]: E0311 21:16:54.224806 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-600035\" already exists" pod="kube-system/kube-controller-manager-no-preload-600035"
Mar 11 21:16:54 no-preload-600035 kubelet[12556]: E0311 21:16:54.228961 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-600035\" already exists" pod="kube-system/kube-apiserver-no-preload-600035"
Mar 11 21:16:54 no-preload-600035 kubelet[12556]: E0311 21:16:54.229323 12556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-600035\" already exists" pod="kube-system/etcd-no-preload-600035"
Mar 11 21:16:55 no-preload-600035 kubelet[12556]: I0311 21:16:55.234403 12556 scope.go:117] "RemoveContainer" containerID="d13f8bb15854eff31a83b3eafa2cdacbeed5d8117241963b0d610c95467a6a0a"
Mar 11 21:16:55 no-preload-600035 kubelet[12556]: E0311 21:16:55.234707 12556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-c8lwg_kubernetes-dashboard(6a7050f1-f5eb-40e1-abc3-e3f05636b55c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-c8lwg" podUID="6a7050f1-f5eb-40e1-abc3-e3f05636b55c"
Mar 11 21:16:55 no-preload-600035 kubelet[12556]: I0311 21:16:55.237973 12556 scope.go:117] "RemoveContainer" containerID="58a56fb2208c019c67c39fbdac8cd4b5be36b12bf1f6e9f5ac65f0df0e280ba9"
Mar 11 21:16:56 no-preload-600035 kubelet[12556]: I0311 21:16:56.261477 12556 scope.go:117] "RemoveContainer" containerID="d13f8bb15854eff31a83b3eafa2cdacbeed5d8117241963b0d610c95467a6a0a"
Mar 11 21:16:56 no-preload-600035 kubelet[12556]: E0311 21:16:56.265443 12556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-c8lwg_kubernetes-dashboard(6a7050f1-f5eb-40e1-abc3-e3f05636b55c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-c8lwg" podUID="6a7050f1-f5eb-40e1-abc3-e3f05636b55c"
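Two failure modes recur in the kubelet entries above: the metrics-server container references an image on the unresolvable fake.domain registry, so every pull attempt ends in ErrImagePull, and dashboard-metrics-scraper is sitting in a short (10s) CrashLoopBackOff between restarts. A hedged sketch of surfacing the same per-container "waiting" reasons from the pod status via kubectl's JSONPath output follows, assuming the same context; the helper name and the hard-coded pod names (taken from the log) are purely illustrative.

// print the "waiting" reason (ErrImagePull, CrashLoopBackOff, ...) for every
// container in a pod, using the same kubectl JSONPath mechanism the status
// checks in this log rely on.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func waitingReasons(kcontext, namespace, pod string) string {
	out, _ := exec.Command("kubectl", "--context", kcontext,
		"-n", namespace, "get", "pod", pod,
		"-o", "jsonpath={.status.containerStatuses[*].state.waiting.reason}").Output()
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println(waitingReasons("no-preload-600035", "kube-system", "metrics-server-57f55c9bc5-mf4kp"))
	fmt.Println(waitingReasons("no-preload-600035", "kubernetes-dashboard", "dashboard-metrics-scraper-5f989dc9cf-c8lwg"))
}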
==> kubernetes-dashboard [e55f5280f745] <==
2024/03/11 21:16:05 Starting overwatch
2024/03/11 21:16:05 Using namespace: kubernetes-dashboard
2024/03/11 21:16:05 Using in-cluster config to connect to apiserver
2024/03/11 21:16:05 Using secret token for csrf signing
2024/03/11 21:16:05 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/03/11 21:16:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/03/11 21:16:05 Successful initial request to the apiserver, version: v1.29.0-rc.2
2024/03/11 21:16:05 Generating JWE encryption key
2024/03/11 21:16:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/03/11 21:16:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/03/11 21:16:06 Initializing JWE encryption key from synchronized object
2024/03/11 21:16:06 Creating in-cluster Sidecar client
2024/03/11 21:16:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/11 21:16:06 Serving insecurely on HTTP port: 9090
2024/03/11 21:16:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [8cadb500d829] <==
I0311 21:15:51.976679 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0311 21:15:52.009633 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0311 21:15:52.009905 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0311 21:15:52.051498 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0311 21:15:52.052081 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-600035_50acbd54-de71-4ecb-a307-57232f43d25f!
I0311 21:15:52.058172 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6757b689-1777-450b-89da-297e829759bd", APIVersion:"v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-600035_50acbd54-de71-4ecb-a307-57232f43d25f became leader
I0311 21:15:52.154363 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-600035_50acbd54-de71-4ecb-a307-57232f43d25f!
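The provisioner above acquires a leader lease on the kube-system/k8s.io-minikube-hostpath lock before starting its controller, and the LeaderElection event shows the lock object is an Endpoints resource. Under the assumption that the current holder is recorded in that object's annotations (the usual behaviour for an endpoints-based lock), the sketch below dumps those annotations with the same context so the lease owner can be inspected from outside the pod.

// dump the annotations on the Endpoints object used as the provisioner's
// leader-election lock; the holder identity seen in the log above should
// appear in the leader-election annotation.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-600035",
		"-n", "kube-system", "get", "endpoints", "k8s.io-minikube-hostpath",
		"-o", "jsonpath={.metadata.annotations}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(string(out))
}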
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-600035 -n no-preload-600035
helpers_test.go:261: (dbg) Run: kubectl --context no-preload-600035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mf4kp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context no-preload-600035 describe pod metrics-server-57f55c9bc5-mf4kp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-600035 describe pod metrics-server-57f55c9bc5-mf4kp: exit status 1 (65.923006ms)
** stderr **
Error from server (NotFound): pods "metrics-server-57f55c9bc5-mf4kp" not found
** /stderr **
helpers_test.go:279: kubectl --context no-preload-600035 describe pod metrics-server-57f55c9bc5-mf4kp: exit status 1
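The post-mortem first lists every pod that is not in the Running phase with a field selector and then tries to describe each one; here the metrics-server pod had evidently been replaced or removed between the list and the describe, hence the NotFound. The same listing can be done directly against the API with client-go rather than shelling out to kubectl; the sketch below assumes the kubeconfig containing the no-preload-600035 context is loadable from the default location.

// list non-Running pods across all namespaces, the client-go equivalent of:
// kubectl get po -A --field-selector=status.phase!=Running
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve the kubeconfig the same way `kubectl --context` would.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "no-preload-600035"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}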
--- FAIL: TestStartStop/group/no-preload/serial/Pause (39.96s)