=== RUN TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run: kubectl --context functional-995621 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run: kubectl --context functional-995621 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-n6gcf" [03272f01-0bfc-4d24-ac51-3fb488960949] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-n6gcf" [03272f01-0bfc-4d24-ac51-3fb488960949] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004210157s
functional_test.go:1649: (dbg) Run: out/minikube-linux-amd64 -p functional-995621 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.30:30878
functional_test.go:1661: error fetching http://192.168.39.30:30878: Get "http://192.168.39.30:30878": dial tcp 192.168.39.30:30878: connect: connection refused
functional_test.go:1661: error fetching http://192.168.39.30:30878: Get "http://192.168.39.30:30878": dial tcp 192.168.39.30:30878: connect: connection refused
functional_test.go:1661: error fetching http://192.168.39.30:30878: Get "http://192.168.39.30:30878": dial tcp 192.168.39.30:30878: connect: connection refused
functional_test.go:1661: error fetching http://192.168.39.30:30878: Get "http://192.168.39.30:30878": dial tcp 192.168.39.30:30878: connect: connection refused
functional_test.go:1661: error fetching http://192.168.39.30:30878: Get "http://192.168.39.30:30878": dial tcp 192.168.39.30:30878: connect: connection refused
functional_test.go:1661: error fetching http://192.168.39.30:30878: Get "http://192.168.39.30:30878": dial tcp 192.168.39.30:30878: connect: connection refused
functional_test.go:1661: error fetching http://192.168.39.30:30878: Get "http://192.168.39.30:30878": dial tcp 192.168.39.30:30878: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.39.30:30878: Get "http://192.168.39.30:30878": dial tcp 192.168.39.30:30878: connect: connection refused
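For context, the repeated lines above come from a fixed-interval retry against the NodePort URL reported by `minikube service hello-node-connect --url`. A minimal Go sketch of that pattern follows; this is not the actual functional_test.go code, and the attempt count and wait interval are illustrative assumptions.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // fetchWithRetry polls url until it answers or attempts run out,
    // printing one line per failure, similar in spirit to the loop above.
    func fetchWithRetry(url string, attempts int, wait time.Duration) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                return nil // the NodePort answered
            }
            lastErr = err
            fmt.Printf("error fetching %s: %v\n", url, err)
            time.Sleep(wait)
        }
        return fmt.Errorf("failed to fetch %s: %w", url, lastErr)
    }

    func main() {
        // URL taken from the log above; 7 attempts / 5s wait are assumptions.
        if err := fetchWithRetry("http://192.168.39.30:30878", 7, 5*time.Second); err != nil {
            fmt.Println(err)
        }
    }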
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run: kubectl --context functional-995621 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name: hello-node-connect-57b4589c47-n6gcf
Namespace: default
Priority: 0
Service Account: default
Node: functional-995621/192.168.39.30
Start Time: Mon, 12 Aug 2024 10:35:08 +0000
Labels: app=hello-node-connect
pod-template-hash=57b4589c47
Annotations: <none>
Status: Running
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Controlled By: ReplicaSet/hello-node-connect-57b4589c47
Containers:
echoserver:
Container ID: containerd://82d7083546ee0b4756ec09b3acdcd601a91b6a2e1c6ff3c3b4d65a425524d775
Image: registry.k8s.io/echoserver:1.8
Image ID: registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 12 Aug 2024 10:35:12 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8g8fq (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-8g8fq:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28s default-scheduler Successfully assigned default/hello-node-connect-57b4589c47-n6gcf to functional-995621
Normal Pulling 28s kubelet Pulling image "registry.k8s.io/echoserver:1.8"
Normal Pulled 25s kubelet Successfully pulled image "registry.k8s.io/echoserver:1.8" in 2.756s (2.756s including waiting). Image size: 46237695 bytes.
Normal Created 25s kubelet Created container echoserver
Normal Started 25s kubelet Started container echoserver
functional_test.go:1608: (dbg) Run: kubectl --context functional-995621 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run: kubectl --context functional-995621 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name: hello-node-connect
Namespace: default
Labels: app=hello-node-connect
Annotations: <none>
Selector: app=hello-node-connect
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.74.255
IPs: 10.100.74.255
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30878/TCP
Endpoints: 10.244.0.5:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
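The service itself looks healthy here (a ready endpoint at 10.244.0.5:8080 behind NodePort 30878), so the refusals above likely occur at the node port rather than at the pod. A hypothetical diagnostic sketch, not part of the minikube test suite, that separates "connection refused" from a timeout with a raw TCP dial against the node:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Node IP and NodePort taken from the describe output above.
        addr := net.JoinHostPort("192.168.39.30", "30878")
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err != nil {
            // "connection refused" suggests nothing is listening/forwarding on the
            // node port (e.g. kube-proxy has not programmed it yet), while an
            // i/o timeout would point at routing or firewalling instead.
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("NodePort is reachable")
    }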
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-995621 -n functional-995621
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-995621 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-995621 logs -n 25: (1.810006108s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs:
-- stdout --
==> Audit <==
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| mount | -p functional-995621 | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | /tmp/TestFunctionalparallelMountCmdspecific-port4071843439/001:/mount-9p | | | | | |
| | --alsologtostderr -v=1 --port 46464 | | | | | |
| service | functional-995621 service | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | hello-node --url | | | | | |
| ssh | functional-995621 ssh findmnt | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | -T /mount-9p | grep 9p | | | | | |
| ssh | functional-995621 ssh -- ls | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | -la /mount-9p | | | | | |
| start | -p functional-995621 | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p functional-995621 | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p functional-995621 | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | --dry-run --alsologtostderr | | | | | |
| | -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| dashboard | --url --port 36195 | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | -p functional-995621 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-995621 ssh sudo | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | umount -f /mount-9p | | | | | |
| mount | -p functional-995621 | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | /tmp/TestFunctionalparallelMountCmdVerifyCleanup336789099/001:/mount2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-995621 ssh findmnt | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | -T /mount1 | | | | | |
| mount | -p functional-995621 | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | /tmp/TestFunctionalparallelMountCmdVerifyCleanup336789099/001:/mount3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| mount | -p functional-995621 | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | /tmp/TestFunctionalparallelMountCmdVerifyCleanup336789099/001:/mount1 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-995621 ssh findmnt | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | -T /mount1 | | | | | |
| ssh | functional-995621 ssh findmnt | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | -T /mount2 | | | | | |
| ssh | functional-995621 ssh findmnt | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | -T /mount3 | | | | | |
| mount | -p functional-995621 | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | --kill=true | | | | | |
| ssh | functional-995621 ssh sudo | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | systemctl is-active docker | | | | | |
| ssh | functional-995621 ssh sudo | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | |
| | systemctl is-active crio | | | | | |
| license | | minikube | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| ssh | functional-995621 ssh sudo cat | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | /etc/test/nested/copy/11045/hosts | | | | | |
| image | functional-995621 image load --daemon | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | kicbase/echo-server:functional-995621 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-995621 image ls | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| image | functional-995621 image load --daemon | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
| | kicbase/echo-server:functional-995621 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-995621 image ls | functional-995621 | jenkins | v1.33.1 | 12 Aug 24 10:35 UTC | 12 Aug 24 10:35 UTC |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/08/12 10:35:21
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0812 10:35:21.172445 20213 out.go:291] Setting OutFile to fd 1 ...
I0812 10:35:21.172533 20213 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:35:21.172540 20213 out.go:304] Setting ErrFile to fd 2...
I0812 10:35:21.172544 20213 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:35:21.172728 20213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3807/.minikube/bin
I0812 10:35:21.173197 20213 out.go:298] Setting JSON to false
I0812 10:35:21.174123 20213 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1061,"bootTime":1723457860,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0812 10:35:21.174177 20213 start.go:139] virtualization: kvm guest
I0812 10:35:21.176158 20213 out.go:177] * [functional-995621] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0812 10:35:21.177473 20213 out.go:177] - MINIKUBE_LOCATION=19409
I0812 10:35:21.177476 20213 notify.go:220] Checking for updates...
I0812 10:35:21.179576 20213 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0812 10:35:21.180960 20213 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19409-3807/kubeconfig
I0812 10:35:21.182268 20213 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3807/.minikube
I0812 10:35:21.183570 20213 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0812 10:35:21.184929 20213 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0812 10:35:21.186547 20213 config.go:182] Loaded profile config "functional-995621": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0812 10:35:21.186914 20213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0812 10:35:21.186990 20213 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:35:21.201695 20213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
I0812 10:35:21.202124 20213 main.go:141] libmachine: () Calling .GetVersion
I0812 10:35:21.202620 20213 main.go:141] libmachine: Using API Version 1
I0812 10:35:21.202640 20213 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:35:21.203028 20213 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:35:21.203218 20213 main.go:141] libmachine: (functional-995621) Calling .DriverName
I0812 10:35:21.203470 20213 driver.go:392] Setting default libvirt URI to qemu:///system
I0812 10:35:21.203752 20213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0812 10:35:21.203781 20213 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:35:21.218585 20213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
I0812 10:35:21.218905 20213 main.go:141] libmachine: () Calling .GetVersion
I0812 10:35:21.219359 20213 main.go:141] libmachine: Using API Version 1
I0812 10:35:21.219377 20213 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:35:21.219635 20213 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:35:21.219862 20213 main.go:141] libmachine: (functional-995621) Calling .DriverName
I0812 10:35:21.250768 20213 out.go:177] * Using the kvm2 driver based on existing profile
I0812 10:35:21.252276 20213 start.go:297] selected driver: kvm2
I0812 10:35:21.252294 20213 start.go:901] validating driver "kvm2" against &{Name:functional-995621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-995621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0812 10:35:21.252398 20213 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0812 10:35:21.253360 20213 cni.go:84] Creating CNI manager for ""
I0812 10:35:21.253377 20213 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0812 10:35:21.253419 20213 start.go:340] cluster config:
{Name:functional-995621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-995621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0812 10:35:21.255010 20213 out.go:177] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
7e8d0cb0cdbe5 115053965e86b 5 seconds ago Running dashboard-metrics-scraper 0 194ff1bc2f321 dashboard-metrics-scraper-b5fc48f67-77lnh
9cff42e130ec0 07655ddf2eebe 8 seconds ago Running kubernetes-dashboard 0 d5036c592fd49 kubernetes-dashboard-779776cb65-qslhq
84636aa00a886 56cc512116c8f 22 seconds ago Exited mount-munger 0 ca31d4a6b3d22 busybox-mount
529420e73d1d5 82e4c8a736a4f 26 seconds ago Running echoserver 0 7a7fd0e377709 hello-node-6d85cfcfd8-6qjnz
82d7083546ee0 82e4c8a736a4f 26 seconds ago Running echoserver 0 47679b10ed9f6 hello-node-connect-57b4589c47-n6gcf
0f801d5a3d261 6e38f40d628db 41 seconds ago Running storage-provisioner 3 2832d22bd50e2 storage-provisioner
610e424a02465 6e38f40d628db 53 seconds ago Exited storage-provisioner 2 2832d22bd50e2 storage-provisioner
0c9bb2e980931 1f6d574d502f3 57 seconds ago Running kube-apiserver 0 381277d910f52 kube-apiserver-functional-995621
d63f304f5079d 3861cfcd7c04c 57 seconds ago Running etcd 2 87f0428c4fd08 etcd-functional-995621
84bd993738cab 3edc18e7b7672 57 seconds ago Running kube-scheduler 2 04965e27e643e kube-scheduler-functional-995621
e50daf6e0d797 76932a3b37d7e 57 seconds ago Running kube-controller-manager 3 cd759f87630ec kube-controller-manager-functional-995621
654b27ca9c065 76932a3b37d7e About a minute ago Exited kube-controller-manager 2 cd759f87630ec kube-controller-manager-functional-995621
b97bbf89193c9 3edc18e7b7672 About a minute ago Exited kube-scheduler 1 04965e27e643e kube-scheduler-functional-995621
021dc33a9155a 3861cfcd7c04c About a minute ago Exited etcd 1 87f0428c4fd08 etcd-functional-995621
a2dc930b5a525 55bb025d2cfa5 About a minute ago Running kube-proxy 1 57761f3684b27 kube-proxy-8hxxw
3c9c44131b08b cbb01a7bd410d About a minute ago Running coredns 1 1273334cc407d coredns-7db6d8ff4d-f75c5
f2350a1af02ea cbb01a7bd410d 2 minutes ago Exited coredns 0 1273334cc407d coredns-7db6d8ff4d-f75c5
8acef2c844c28 55bb025d2cfa5 2 minutes ago Exited kube-proxy 0 57761f3684b27 kube-proxy-8hxxw
==> containerd <==
Aug 12 10:35:31 functional-995621 containerd[3749]: time="2024-08-12T10:35:31.653818993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 10:35:31 functional-995621 containerd[3749]: time="2024-08-12T10:35:31.654066241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 10:35:31 functional-995621 containerd[3749]: time="2024-08-12T10:35:31.738218938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:sp-pod,Uid:78069b05-ab10-418c-8460-b6fa55111fbc,Namespace:default,Attempt:0,} returns sandbox id \"a7074770d405acc8b522cc4dea2d455c8ee2fd1d2b079769f0a5afde2820f232\""
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.383323960Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.385011584Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=19757297"
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.387268140Z" level=info msg="ImageCreate event name:\"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.388696202Z" level=info msg="Pulled image \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" with image id \"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7\", repo tag \"\", repo digest \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\", size \"19746404\" in 3.346327562s"
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.388745660Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" returns image reference \"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7\""
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.394148931Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.395182433Z" level=info msg="CreateContainer within sandbox \"194ff1bc2f3212bb0c00a82c3947448a382ba8caf188a827a54cf8df5e81448a\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.399047188Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.420256918Z" level=info msg="CreateContainer within sandbox \"194ff1bc2f3212bb0c00a82c3947448a382ba8caf188a827a54cf8df5e81448a\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"7e8d0cb0cdbe542ed2d479cb1a6c395db8366c9c5f0d60ca5c4803ca974774c1\""
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.421014354Z" level=info msg="StartContainer for \"7e8d0cb0cdbe542ed2d479cb1a6c395db8366c9c5f0d60ca5c4803ca974774c1\""
Aug 12 10:35:33 functional-995621 containerd[3749]: time="2024-08-12T10:35:33.489313292Z" level=info msg="StartContainer for \"7e8d0cb0cdbe542ed2d479cb1a6c395db8366c9c5f0d60ca5c4803ca974774c1\" returns successfully"
Aug 12 10:35:34 functional-995621 containerd[3749]: time="2024-08-12T10:35:34.304627878Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Aug 12 10:35:35 functional-995621 containerd[3749]: time="2024-08-12T10:35:35.627189522Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-995621\""
Aug 12 10:35:35 functional-995621 containerd[3749]: time="2024-08-12T10:35:35.664922203Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 10:35:35 functional-995621 containerd[3749]: time="2024-08-12T10:35:35.665426285Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-995621\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 10:35:36 functional-995621 containerd[3749]: time="2024-08-12T10:35:36.547308574Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-995621\""
Aug 12 10:35:36 functional-995621 containerd[3749]: time="2024-08-12T10:35:36.570767001Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-995621\""
Aug 12 10:35:36 functional-995621 containerd[3749]: time="2024-08-12T10:35:36.593470211Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
Aug 12 10:35:36 functional-995621 containerd[3749]: time="2024-08-12T10:35:36.684397096Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-995621\" returns successfully"
Aug 12 10:35:37 functional-995621 containerd[3749]: time="2024-08-12T10:35:37.379702197Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-995621\""
Aug 12 10:35:37 functional-995621 containerd[3749]: time="2024-08-12T10:35:37.384694961Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 10:35:37 functional-995621 containerd[3749]: time="2024-08-12T10:35:37.386345176Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-995621\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
==> coredns [3c9c44131b08b1889ede7452da62bb83d45bc8c722c1139b5ba575f9af20cf74] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:40078 - 16155 "HINFO IN 8453809499327683075.9156561792593226435. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015435936s
==> coredns [f2350a1af02ea4ac82776a997274dd775b3592b56d1c7a6dd3e86964f09aa17e] <==
linux/amd64, go1.20.7, ae2bbc2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[437857073]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:32:56.914) (total time: 30001ms):
Trace[437857073]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:33:26.915)
Trace[437857073]: [30.001093554s] [30.001093554s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[590379683]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:32:56.915) (total time: 30000ms):
Trace[590379683]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:33:26.915)
Trace[590379683]: [30.000813273s] [30.000813273s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[1885363111]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:32:56.913) (total time: 30001ms):
Trace[1885363111]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:33:26.914)
Trace[1885363111]: [30.001585293s] [30.001585293s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
[INFO] Reloading complete
[INFO] 127.0.0.1:44573 - 49799 "HINFO IN 1980577033680817462.8922499370603222849. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008764904s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-995621
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-995621
kubernetes.io/os=linux
minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
minikube.k8s.io/name=functional-995621
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_08_12T10_32_42_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 12 Aug 2024 10:32:39 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-995621
AcquireTime: <unset>
RenewTime: Mon, 12 Aug 2024 10:35:35 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 12 Aug 2024 10:34:44 +0000 Mon, 12 Aug 2024 10:32:37 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 12 Aug 2024 10:34:44 +0000 Mon, 12 Aug 2024 10:32:37 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 12 Aug 2024 10:34:44 +0000 Mon, 12 Aug 2024 10:32:37 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 12 Aug 2024 10:34:44 +0000 Mon, 12 Aug 2024 10:32:42 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.30
Hostname: functional-995621
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 3f9e52dc9908418e8365e9b7159137b2
System UUID: 3f9e52dc-9908-418e-8365-e9b7159137b2
Boot ID: ca541500-6381-4ffc-9ca0-fd697bcd539f
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.20
Kubelet Version: v1.30.3
Kube-Proxy Version: v1.30.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-6d85cfcfd8-6qjnz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29s
default hello-node-connect-57b4589c47-n6gcf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30s
default mysql-64454c8b5c-kgcdb 600m (30%) 700m (35%) 512Mi (13%) 700Mi (18%) 14s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7s
kube-system coredns-7db6d8ff4d-f75c5 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m43s
kube-system etcd-functional-995621 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 2m57s
kube-system kube-apiserver-functional-995621 250m (12%) 0 (0%) 0 (0%) 0 (0%) 54s
kube-system kube-controller-manager-functional-995621 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m57s
kube-system kube-proxy-8hxxw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m43s
kube-system kube-scheduler-functional-995621 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m57s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m42s
kubernetes-dashboard dashboard-metrics-scraper-b5fc48f67-77lnh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16s
kubernetes-dashboard kubernetes-dashboard-779776cb65-qslhq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (67%) 700m (35%)
memory 682Mi (17%) 870Mi (22%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m42s kube-proxy
Normal Starting 108s kube-proxy
Normal NodeHasSufficientMemory 2m57s kubelet Node functional-995621 status is now: NodeHasSufficientMemory
Normal Starting 2m57s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2m57s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 2m57s kubelet Node functional-995621 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m57s kubelet Node functional-995621 status is now: NodeHasSufficientPID
Normal NodeReady 2m56s kubelet Node functional-995621 status is now: NodeReady
Normal RegisteredNode 2m44s node-controller Node functional-995621 event: Registered Node functional-995621 in Controller
Normal Starting 99s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 99s (x8 over 99s) kubelet Node functional-995621 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 99s (x8 over 99s) kubelet Node functional-995621 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 99s (x7 over 99s) kubelet Node functional-995621 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 99s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 84s node-controller Node functional-995621 event: Registered Node functional-995621 in Controller
Normal Starting 58s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 58s (x8 over 58s) kubelet Node functional-995621 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 58s (x8 over 58s) kubelet Node functional-995621 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 58s (x7 over 58s) kubelet Node functional-995621 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 58s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 41s node-controller Node functional-995621 event: Registered Node functional-995621 in Controller
==> dmesg <==
[ +0.147000] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
[ +0.295906] systemd-fstab-generator[2250]: Ignoring "noauto" option for root device
[ +1.437975] systemd-fstab-generator[2410]: Ignoring "noauto" option for root device
[ +0.082354] kauditd_printk_skb: 102 callbacks suppressed
[ +5.774562] kauditd_printk_skb: 18 callbacks suppressed
[ +10.379602] kauditd_printk_skb: 21 callbacks suppressed
[ +1.288356] systemd-fstab-generator[3115]: Ignoring "noauto" option for root device
[Aug12 10:34] kauditd_printk_skb: 23 callbacks suppressed
[ +5.763796] systemd-fstab-generator[3311]: Ignoring "noauto" option for root device
[ +12.299557] systemd-fstab-generator[3674]: Ignoring "noauto" option for root device
[ +0.078453] kauditd_printk_skb: 12 callbacks suppressed
[ +0.063402] systemd-fstab-generator[3686]: Ignoring "noauto" option for root device
[ +0.184208] systemd-fstab-generator[3700]: Ignoring "noauto" option for root device
[ +0.150967] systemd-fstab-generator[3712]: Ignoring "noauto" option for root device
[ +0.295536] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
[ +1.927497] systemd-fstab-generator[3903]: Ignoring "noauto" option for root device
[ +10.885393] kauditd_printk_skb: 125 callbacks suppressed
[ +1.504751] systemd-fstab-generator[4188]: Ignoring "noauto" option for root device
[ +4.336996] kauditd_printk_skb: 41 callbacks suppressed
[ +12.889973] systemd-fstab-generator[4576]: Ignoring "noauto" option for root device
[Aug12 10:35] kauditd_printk_skb: 19 callbacks suppressed
[ +5.162415] kauditd_printk_skb: 19 callbacks suppressed
[ +6.010148] kauditd_printk_skb: 42 callbacks suppressed
[ +7.301451] kauditd_printk_skb: 15 callbacks suppressed
[ +7.667490] kauditd_printk_skb: 37 callbacks suppressed
==> etcd [021dc33a9155a43f1bdc1cf0d4644f249c3a893c4f704dba984fb4f077fcd970] <==
{"level":"info","ts":"2024-08-12T10:33:48.260461Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-08-12T10:33:49.621702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 is starting a new election at term 2"}
{"level":"info","ts":"2024-08-12T10:33:49.62176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became pre-candidate at term 2"}
{"level":"info","ts":"2024-08-12T10:33:49.621779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgPreVoteResp from 404c942cebf80710 at term 2"}
{"level":"info","ts":"2024-08-12T10:33:49.621791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became candidate at term 3"}
{"level":"info","ts":"2024-08-12T10:33:49.621796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgVoteResp from 404c942cebf80710 at term 3"}
{"level":"info","ts":"2024-08-12T10:33:49.621804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became leader at term 3"}
{"level":"info","ts":"2024-08-12T10:33:49.621811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 404c942cebf80710 elected leader 404c942cebf80710 at term 3"}
{"level":"info","ts":"2024-08-12T10:33:49.627421Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"404c942cebf80710","local-member-attributes":"{Name:functional-995621 ClientURLs:[https://192.168.39.30:2379]}","request-path":"/0/members/404c942cebf80710/attributes","cluster-id":"ae8b7a508f3fd394","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-12T10:33:49.627452Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-12T10:33:49.627572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-12T10:33:49.627591Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-12T10:33:49.627605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-12T10:33:49.629989Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.30:2379"}
{"level":"info","ts":"2024-08-12T10:33:49.630763Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-12T10:34:39.145168Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-08-12T10:34:39.145212Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-995621","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.30:2380"],"advertise-client-urls":["https://192.168.39.30:2379"]}
{"level":"warn","ts":"2024-08-12T10:34:39.14531Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-12T10:34:39.14534Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-12T10:34:39.146958Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.30:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-12T10:34:39.147037Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.30:2379: use of closed network connection"}
{"level":"info","ts":"2024-08-12T10:34:39.148447Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"404c942cebf80710","current-leader-member-id":"404c942cebf80710"}
{"level":"info","ts":"2024-08-12T10:34:39.151946Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.30:2380"}
{"level":"info","ts":"2024-08-12T10:34:39.152256Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.30:2380"}
{"level":"info","ts":"2024-08-12T10:34:39.152348Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-995621","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.30:2380"],"advertise-client-urls":["https://192.168.39.30:2379"]}
==> etcd [d63f304f5079d556061e7f3aef4bdecfcc1e8cf6457ccb76c6a4ed665e312740] <==
{"level":"info","ts":"2024-08-12T10:34:43.279926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgPreVoteResp from 404c942cebf80710 at term 3"}
{"level":"info","ts":"2024-08-12T10:34:43.280005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became candidate at term 4"}
{"level":"info","ts":"2024-08-12T10:34:43.280182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgVoteResp from 404c942cebf80710 at term 4"}
{"level":"info","ts":"2024-08-12T10:34:43.280316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became leader at term 4"}
{"level":"info","ts":"2024-08-12T10:34:43.280455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 404c942cebf80710 elected leader 404c942cebf80710 at term 4"}
{"level":"info","ts":"2024-08-12T10:34:43.290051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-12T10:34:43.290001Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"404c942cebf80710","local-member-attributes":"{Name:functional-995621 ClientURLs:[https://192.168.39.30:2379]}","request-path":"/0/members/404c942cebf80710/attributes","cluster-id":"ae8b7a508f3fd394","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-12T10:34:43.291482Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-12T10:34:43.291815Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-12T10:34:43.291845Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-12T10:34:43.292666Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.30:2379"}
{"level":"info","ts":"2024-08-12T10:34:43.293548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2024-08-12T10:35:26.950739Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":509066488389866408,"retry-timeout":"500ms"}
{"level":"info","ts":"2024-08-12T10:35:27.104812Z","caller":"traceutil/trace.go:171","msg":"trace[1482579007] linearizableReadLoop","detail":"{readStateIndex:896; appliedIndex:895; }","duration":"654.943327ms","start":"2024-08-12T10:35:26.449837Z","end":"2024-08-12T10:35:27.10478Z","steps":["trace[1482579007] 'read index received' (duration: 654.81943ms)","trace[1482579007] 'applied index is now lower than readState.Index' (duration: 123.423µs)"],"step_count":2}
{"level":"info","ts":"2024-08-12T10:35:27.104919Z","caller":"traceutil/trace.go:171","msg":"trace[1466929052] transaction","detail":"{read_only:false; response_revision:822; number_of_response:1; }","duration":"669.59899ms","start":"2024-08-12T10:35:26.435314Z","end":"2024-08-12T10:35:27.104913Z","steps":["trace[1466929052] 'process raft request' (duration: 669.385952ms)"],"step_count":1}
{"level":"warn","ts":"2024-08-12T10:35:27.105413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:35:26.435293Z","time spent":"669.644921ms","remote":"127.0.0.1:43360","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:804 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2024-08-12T10:35:27.105706Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"655.85967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14996"}
{"level":"info","ts":"2024-08-12T10:35:27.105798Z","caller":"traceutil/trace.go:171","msg":"trace[1779155578] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:822; }","duration":"655.979541ms","start":"2024-08-12T10:35:26.449809Z","end":"2024-08-12T10:35:27.105788Z","steps":["trace[1779155578] 'agreement among raft nodes before linearized reading' (duration: 655.76359ms)"],"step_count":1}
{"level":"warn","ts":"2024-08-12T10:35:27.105821Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:35:26.449796Z","time spent":"656.018616ms","remote":"127.0.0.1:43372","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":15019,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2024-08-12T10:35:27.106036Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"556.809809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:698"}
{"level":"info","ts":"2024-08-12T10:35:27.106138Z","caller":"traceutil/trace.go:171","msg":"trace[2012578664] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:822; }","duration":"556.869198ms","start":"2024-08-12T10:35:26.549199Z","end":"2024-08-12T10:35:27.106069Z","steps":["trace[2012578664] 'agreement among raft nodes before linearized reading' (duration: 556.7838ms)"],"step_count":1}
{"level":"warn","ts":"2024-08-12T10:35:27.106176Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:35:26.549186Z","time spent":"556.983553ms","remote":"127.0.0.1:43360","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":721,"request content":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" "}
{"level":"warn","ts":"2024-08-12T10:35:27.106628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"370.744512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14996"}
{"level":"info","ts":"2024-08-12T10:35:27.10669Z","caller":"traceutil/trace.go:171","msg":"trace[1763023117] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:822; }","duration":"370.864665ms","start":"2024-08-12T10:35:26.735819Z","end":"2024-08-12T10:35:27.106684Z","steps":["trace[1763023117] 'agreement among raft nodes before linearized reading' (duration: 370.750537ms)"],"step_count":1}
{"level":"warn","ts":"2024-08-12T10:35:27.10671Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:35:26.735806Z","time spent":"370.898381ms","remote":"127.0.0.1:43372","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":15019,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
==> kernel <==
10:35:39 up 3 min, 0 users, load average: 1.59, 0.69, 0.27
Linux functional-995621 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [0c9bb2e98093129866e665b18b8ccf269e4309526554194b1b010a9b2d3c9af8] <==
I0812 10:34:45.485034 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0812 10:34:45.725664 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.30]
I0812 10:34:45.726987 1 controller.go:615] quota admission added evaluator for: endpoints
I0812 10:34:45.732434 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0812 10:34:45.990507 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0812 10:34:46.000256 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0812 10:34:46.040544 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0812 10:34:46.062804 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0812 10:34:46.068498 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0812 10:35:03.880523 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.88.187"}
I0812 10:35:08.641285 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0812 10:35:08.759915 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.74.255"}
I0812 10:35:09.467180 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.54.27"}
I0812 10:35:22.657078 1 controller.go:615] quota admission added evaluator for: namespaces
I0812 10:35:22.949349 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.231.253"}
I0812 10:35:22.982680 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.226.186"}
I0812 10:35:24.401448 1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.97.245"}
I0812 10:35:27.108060 1 trace.go:236] Trace[500483982]: "List" accept:application/json, */*,audit-id:e79e3bb3-2497-4a14-bc78-ccf3f5fddb97,client:192.168.39.1,api-group:,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/default/pods,user-agent:e2e-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (12-Aug-2024 10:35:26.449) (total time: 658ms):
Trace[500483982]: ["List(recursive=true) etcd3" audit-id:e79e3bb3-2497-4a14-bc78-ccf3f5fddb97,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: 658ms (10:35:26.449)]
Trace[500483982]: [658.911886ms] [658.911886ms] END
I0812 10:35:27.110206 1 trace.go:236] Trace[818710226]: "Update" accept:application/json, */*,audit-id:e82e17eb-783d-40ac-84f4-e56d5afb561d,client:192.168.39.30,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (12-Aug-2024 10:35:26.433) (total time: 676ms):
Trace[818710226]: ["GuaranteedUpdate etcd3" audit-id:e82e17eb-783d-40ac-84f4-e56d5afb561d,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 676ms (10:35:26.434)
Trace[818710226]: ---"Txn call completed" 675ms (10:35:27.110)]
Trace[818710226]: [676.674338ms] [676.674338ms] END
E0812 10:35:29.870694 1 conn.go:339] Error on socket receive: read tcp 192.168.39.30:8441->192.168.39.1:52316: use of closed network connection
==> kube-controller-manager [654b27ca9c0658cce45894f659ca95037e9af55be24fd2fc6e7458028238114c] <==
I0812 10:34:14.213711 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0812 10:34:14.214294 1 shared_informer.go:320] Caches are synced for expand
I0812 10:34:14.214871 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-995621"
I0812 10:34:14.215051 1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0812 10:34:14.222578 1 shared_informer.go:320] Caches are synced for HPA
I0812 10:34:14.224825 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0812 10:34:14.231233 1 shared_informer.go:320] Caches are synced for endpoint_slice
I0812 10:34:14.233847 1 shared_informer.go:320] Caches are synced for node
I0812 10:34:14.234053 1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
I0812 10:34:14.234310 1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
I0812 10:34:14.234400 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0812 10:34:14.234667 1 shared_informer.go:320] Caches are synced for cidrallocator
I0812 10:34:14.237480 1 shared_informer.go:320] Caches are synced for taint-eviction-controller
I0812 10:34:14.239975 1 shared_informer.go:320] Caches are synced for disruption
I0812 10:34:14.242521 1 shared_informer.go:320] Caches are synced for stateful set
I0812 10:34:14.281534 1 shared_informer.go:320] Caches are synced for bootstrap_signer
I0812 10:34:14.312272 1 shared_informer.go:320] Caches are synced for crt configmap
I0812 10:34:14.363750 1 shared_informer.go:320] Caches are synced for TTL after finished
I0812 10:34:14.368003 1 shared_informer.go:320] Caches are synced for job
I0812 10:34:14.416569 1 shared_informer.go:320] Caches are synced for cronjob
I0812 10:34:14.433000 1 shared_informer.go:320] Caches are synced for resource quota
I0812 10:34:14.460169 1 shared_informer.go:320] Caches are synced for resource quota
I0812 10:34:14.859748 1 shared_informer.go:320] Caches are synced for garbage collector
I0812 10:34:14.926980 1 shared_informer.go:320] Caches are synced for garbage collector
I0812 10:34:14.927024 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
==> kube-controller-manager [e50daf6e0d797dfe326cb10498a80303a5d3bbbd7fad9ffe22f6b5bb47449f03] <==
E0812 10:35:22.773559 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0812 10:35:22.795539 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="21.952293ms"
E0812 10:35:22.795819 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0812 10:35:22.804584 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="50.329749ms"
E0812 10:35:22.804632 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0812 10:35:22.817560 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="21.322483ms"
E0812 10:35:22.817637 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0812 10:35:22.833161 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="27.94953ms"
E0812 10:35:22.833397 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0812 10:35:22.878969 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="61.203076ms"
I0812 10:35:22.879653 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="46.043948ms"
I0812 10:35:22.892537 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="13.373226ms"
I0812 10:35:22.893195 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="84.257µs"
I0812 10:35:22.896369 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="16.663841ms"
I0812 10:35:22.900915 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="39.188µs"
I0812 10:35:22.904693 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="90.784µs"
I0812 10:35:22.928722 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="47.671µs"
I0812 10:35:24.480935 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="30.11796ms"
I0812 10:35:24.490295 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="9.305172ms"
I0812 10:35:24.490464 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="122.69µs"
I0812 10:35:24.493925 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="25.839µs"
I0812 10:35:31.144719 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="17.21683ms"
I0812 10:35:31.146573 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="67.889µs"
I0812 10:35:34.125035 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="11.646866ms"
I0812 10:35:34.125180 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="98.902µs"
==> kube-proxy [8acef2c844c28f68fe1bad5ad3550e7a9f1f39c19e8653580fade92cc41ab3f4] <==
I0812 10:32:56.471836 1 server_linux.go:69] "Using iptables proxy"
I0812 10:32:56.493581 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.30"]
I0812 10:32:56.708666 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0812 10:32:56.708735 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0812 10:32:56.708761 1 server_linux.go:165] "Using iptables Proxier"
I0812 10:32:56.714648 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0812 10:32:56.714867 1 server.go:872] "Version info" version="v1.30.3"
I0812 10:32:56.714882 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0812 10:32:56.718042 1 config.go:192] "Starting service config controller"
I0812 10:32:56.718055 1 shared_informer.go:313] Waiting for caches to sync for service config
I0812 10:32:56.718158 1 config.go:101] "Starting endpoint slice config controller"
I0812 10:32:56.718164 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0812 10:32:56.719546 1 config.go:319] "Starting node config controller"
I0812 10:32:56.719580 1 shared_informer.go:313] Waiting for caches to sync for node config
I0812 10:32:56.819253 1 shared_informer.go:320] Caches are synced for service config
I0812 10:32:56.819170 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0812 10:32:56.821267 1 shared_informer.go:320] Caches are synced for node config
==> kube-proxy [a2dc930b5a525bd555c6e294aae0ddbdd9003940c5cf91d2794c13d575d5f4aa] <==
W0812 10:33:49.954700 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-995621&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:49.954933 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-995621&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:49.955191 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:49.955415 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:50.902744 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:50.902879 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:51.242243 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:51.242304 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:51.467397 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-995621&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:51.467458 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-995621&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:53.346814 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:53.346883 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:53.875996 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:53.876150 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:54.576715 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-995621&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:54.576754 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-995621&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:58.386396 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:58.386445 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:58.576028 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-995621&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:58.576074 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-995621&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:58.678293 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:58.678349 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
I0812 10:34:05.453149 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0812 10:34:07.352612 1 shared_informer.go:320] Caches are synced for service config
I0812 10:34:10.854205 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [84bd993738cab85c0035178cbdad3ee00186582fa8b82b6320068455f9f609d3] <==
I0812 10:34:42.059061 1 serving.go:380] Generated self-signed cert in-memory
W0812 10:34:44.530871 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0812 10:34:44.531211 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0812 10:34:44.531249 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0812 10:34:44.531415 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0812 10:34:44.588759 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
I0812 10:34:44.588793 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0812 10:34:44.592403 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0812 10:34:44.594971 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0812 10:34:44.595616 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0812 10:34:44.596133 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0812 10:34:44.695934 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [b97bbf89193c97eb04adf96ecaf1c0f567afe48a119d5ee4dccfd49192793d6b] <==
W0812 10:33:58.991582 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.30:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:58.991621 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.30:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:59.001520 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.30:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:59.001549 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.30:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:59.096530 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.30:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:59.096569 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.30:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:59.118220 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.30:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:59.118279 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.30:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:33:59.340692 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.30:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
E0812 10:33:59.340763 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.30:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8441: connect: connection refused
W0812 10:34:01.654894 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0812 10:34:01.655362 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0812 10:34:01.655051 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0812 10:34:01.655598 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0812 10:34:01.655308 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0812 10:34:01.655799 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0812 10:34:01.655262 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0812 10:34:01.655918 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0812 10:34:01.686880 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0812 10:34:01.687037 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0812 10:34:08.794925 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0812 10:34:08.993751 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0812 10:34:11.694288 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0812 10:34:39.082602 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0812 10:34:39.082768 1 run.go:74] "command failed" err="finished without leader elect"
==> kubelet <==
Aug 12 10:35:22 functional-995621 kubelet[4195]: I0812 10:35:22.866736 4195 topology_manager.go:215] "Topology Admit Handler" podUID="83d27451-65bf-43bb-99af-7941080f887f" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-qslhq"
Aug 12 10:35:22 functional-995621 kubelet[4195]: I0812 10:35:22.983450 4195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/83d27451-65bf-43bb-99af-7941080f887f-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-qslhq\" (UID: \"83d27451-65bf-43bb-99af-7941080f887f\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-qslhq"
Aug 12 10:35:22 functional-995621 kubelet[4195]: I0812 10:35:22.983517 4195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4wck\" (UniqueName: \"kubernetes.io/projected/5f44d961-61c9-44ba-b673-90ccf11e5ae6-kube-api-access-b4wck\") pod \"dashboard-metrics-scraper-b5fc48f67-77lnh\" (UID: \"5f44d961-61c9-44ba-b673-90ccf11e5ae6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-77lnh"
Aug 12 10:35:22 functional-995621 kubelet[4195]: I0812 10:35:22.983538 4195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtjv\" (UniqueName: \"kubernetes.io/projected/83d27451-65bf-43bb-99af-7941080f887f-kube-api-access-zmtjv\") pod \"kubernetes-dashboard-779776cb65-qslhq\" (UID: \"83d27451-65bf-43bb-99af-7941080f887f\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-qslhq"
Aug 12 10:35:22 functional-995621 kubelet[4195]: I0812 10:35:22.983561 4195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f44d961-61c9-44ba-b673-90ccf11e5ae6-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-77lnh\" (UID: \"5f44d961-61c9-44ba-b673-90ccf11e5ae6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-77lnh"
Aug 12 10:35:24 functional-995621 kubelet[4195]: I0812 10:35:24.472333 4195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.443843681 podStartE2EDuration="10.472315799s" podCreationTimestamp="2024-08-12 10:35:14 +0000 UTC" firstStartedPulling="2024-08-12 10:35:15.256153705 +0000 UTC m=+34.602318317" lastFinishedPulling="2024-08-12 10:35:22.284625823 +0000 UTC m=+41.630790435" observedRunningTime="2024-08-12 10:35:23.0780581 +0000 UTC m=+42.424222742" watchObservedRunningTime="2024-08-12 10:35:24.472315799 +0000 UTC m=+43.818480512"
Aug 12 10:35:24 functional-995621 kubelet[4195]: I0812 10:35:24.472746 4195 topology_manager.go:215] "Topology Admit Handler" podUID="1c62abb8-0c36-4c04-9f0c-64d37c780faf" podNamespace="default" podName="mysql-64454c8b5c-kgcdb"
Aug 12 10:35:24 functional-995621 kubelet[4195]: I0812 10:35:24.497834 4195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jjdt\" (UniqueName: \"kubernetes.io/projected/1c62abb8-0c36-4c04-9f0c-64d37c780faf-kube-api-access-8jjdt\") pod \"mysql-64454c8b5c-kgcdb\" (UID: \"1c62abb8-0c36-4c04-9f0c-64d37c780faf\") " pod="default/mysql-64454c8b5c-kgcdb"
Aug 12 10:35:30 functional-995621 kubelet[4195]: I0812 10:35:30.538317 4195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxsgz\" (UniqueName: \"kubernetes.io/projected/9da94a19-c588-43ab-80fa-8df28a6b2a90-kube-api-access-fxsgz\") pod \"9da94a19-c588-43ab-80fa-8df28a6b2a90\" (UID: \"9da94a19-c588-43ab-80fa-8df28a6b2a90\") "
Aug 12 10:35:30 functional-995621 kubelet[4195]: I0812 10:35:30.538526 4195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/9da94a19-c588-43ab-80fa-8df28a6b2a90-pvc-95072959-4c1e-4162-a85a-662134d8e79c\") pod \"9da94a19-c588-43ab-80fa-8df28a6b2a90\" (UID: \"9da94a19-c588-43ab-80fa-8df28a6b2a90\") "
Aug 12 10:35:30 functional-995621 kubelet[4195]: I0812 10:35:30.538588 4195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9da94a19-c588-43ab-80fa-8df28a6b2a90-pvc-95072959-4c1e-4162-a85a-662134d8e79c" (OuterVolumeSpecName: "mypd") pod "9da94a19-c588-43ab-80fa-8df28a6b2a90" (UID: "9da94a19-c588-43ab-80fa-8df28a6b2a90"). InnerVolumeSpecName "pvc-95072959-4c1e-4162-a85a-662134d8e79c". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 10:35:30 functional-995621 kubelet[4195]: I0812 10:35:30.540232 4195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da94a19-c588-43ab-80fa-8df28a6b2a90-kube-api-access-fxsgz" (OuterVolumeSpecName: "kube-api-access-fxsgz") pod "9da94a19-c588-43ab-80fa-8df28a6b2a90" (UID: "9da94a19-c588-43ab-80fa-8df28a6b2a90"). InnerVolumeSpecName "kube-api-access-fxsgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 10:35:30 functional-995621 kubelet[4195]: I0812 10:35:30.639814 4195 reconciler_common.go:289] "Volume detached for volume \"pvc-95072959-4c1e-4162-a85a-662134d8e79c\" (UniqueName: \"kubernetes.io/host-path/9da94a19-c588-43ab-80fa-8df28a6b2a90-pvc-95072959-4c1e-4162-a85a-662134d8e79c\") on node \"functional-995621\" DevicePath \"\""
Aug 12 10:35:30 functional-995621 kubelet[4195]: I0812 10:35:30.639841 4195 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fxsgz\" (UniqueName: \"kubernetes.io/projected/9da94a19-c588-43ab-80fa-8df28a6b2a90-kube-api-access-fxsgz\") on node \"functional-995621\" DevicePath \"\""
Aug 12 10:35:31 functional-995621 kubelet[4195]: I0812 10:35:31.082278 4195 scope.go:117] "RemoveContainer" containerID="a4acde0faab5b65f1a53a2fe5772d9cf0ba3896e83fccc1c77fd55d23875fb05"
Aug 12 10:35:31 functional-995621 kubelet[4195]: I0812 10:35:31.096075 4195 scope.go:117] "RemoveContainer" containerID="a4acde0faab5b65f1a53a2fe5772d9cf0ba3896e83fccc1c77fd55d23875fb05"
Aug 12 10:35:31 functional-995621 kubelet[4195]: E0812 10:35:31.096469 4195 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4acde0faab5b65f1a53a2fe5772d9cf0ba3896e83fccc1c77fd55d23875fb05\": not found" containerID="a4acde0faab5b65f1a53a2fe5772d9cf0ba3896e83fccc1c77fd55d23875fb05"
Aug 12 10:35:31 functional-995621 kubelet[4195]: I0812 10:35:31.096512 4195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4acde0faab5b65f1a53a2fe5772d9cf0ba3896e83fccc1c77fd55d23875fb05"} err="failed to get container status \"a4acde0faab5b65f1a53a2fe5772d9cf0ba3896e83fccc1c77fd55d23875fb05\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4acde0faab5b65f1a53a2fe5772d9cf0ba3896e83fccc1c77fd55d23875fb05\": not found"
Aug 12 10:35:31 functional-995621 kubelet[4195]: I0812 10:35:31.230733 4195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-qslhq" podStartSLOduration=2.657514179 podStartE2EDuration="9.230713794s" podCreationTimestamp="2024-08-12 10:35:22 +0000 UTC" firstStartedPulling="2024-08-12 10:35:23.466863373 +0000 UTC m=+42.813027996" lastFinishedPulling="2024-08-12 10:35:30.040062976 +0000 UTC m=+49.386227611" observedRunningTime="2024-08-12 10:35:31.126932448 +0000 UTC m=+50.473097080" watchObservedRunningTime="2024-08-12 10:35:31.230713794 +0000 UTC m=+50.576878425"
Aug 12 10:35:31 functional-995621 kubelet[4195]: I0812 10:35:31.230987 4195 topology_manager.go:215] "Topology Admit Handler" podUID="78069b05-ab10-418c-8460-b6fa55111fbc" podNamespace="default" podName="sp-pod"
Aug 12 10:35:31 functional-995621 kubelet[4195]: E0812 10:35:31.231052 4195 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9da94a19-c588-43ab-80fa-8df28a6b2a90" containerName="myfrontend"
Aug 12 10:35:31 functional-995621 kubelet[4195]: I0812 10:35:31.231125 4195 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da94a19-c588-43ab-80fa-8df28a6b2a90" containerName="myfrontend"
Aug 12 10:35:31 functional-995621 kubelet[4195]: I0812 10:35:31.244059 4195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95072959-4c1e-4162-a85a-662134d8e79c\" (UniqueName: \"kubernetes.io/host-path/78069b05-ab10-418c-8460-b6fa55111fbc-pvc-95072959-4c1e-4162-a85a-662134d8e79c\") pod \"sp-pod\" (UID: \"78069b05-ab10-418c-8460-b6fa55111fbc\") " pod="default/sp-pod"
Aug 12 10:35:31 functional-995621 kubelet[4195]: I0812 10:35:31.244307 4195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmfm\" (UniqueName: \"kubernetes.io/projected/78069b05-ab10-418c-8460-b6fa55111fbc-kube-api-access-brmfm\") pod \"sp-pod\" (UID: \"78069b05-ab10-418c-8460-b6fa55111fbc\") " pod="default/sp-pod"
Aug 12 10:35:32 functional-995621 kubelet[4195]: I0812 10:35:32.847071 4195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da94a19-c588-43ab-80fa-8df28a6b2a90" path="/var/lib/kubelet/pods/9da94a19-c588-43ab-80fa-8df28a6b2a90/volumes"
==> kubernetes-dashboard [9cff42e130ec027ae624f834f6b8ae7b14c61110ad97fb5198e0acfd4ffcf3b7] <==
2024/08/12 10:35:30 Using namespace: kubernetes-dashboard
2024/08/12 10:35:30 Using in-cluster config to connect to apiserver
2024/08/12 10:35:30 Using secret token for csrf signing
2024/08/12 10:35:30 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/08/12 10:35:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/08/12 10:35:30 Successful initial request to the apiserver, version: v1.30.3
2024/08/12 10:35:30 Generating JWE encryption key
2024/08/12 10:35:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/08/12 10:35:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/08/12 10:35:30 Initializing JWE encryption key from synchronized object
2024/08/12 10:35:30 Creating in-cluster Sidecar client
2024/08/12 10:35:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/08/12 10:35:30 Serving insecurely on HTTP port: 9090
2024/08/12 10:35:30 Starting overwatch
==> storage-provisioner [0f801d5a3d261c5fb719440d6d1b6e36f7b983a90a4fa587de6725d34f858f4d] <==
I0812 10:34:56.958544 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0812 10:34:56.965863 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0812 10:34:56.966013 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0812 10:35:14.365354 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0812 10:35:14.365724 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-995621_4ea2d543-8400-4855-ac3e-205907897db1!
I0812 10:35:14.367277 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf54f690-af7e-4062-80b9-b016a69f2810", APIVersion:"v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-995621_4ea2d543-8400-4855-ac3e-205907897db1 became leader
I0812 10:35:14.466438 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-995621_4ea2d543-8400-4855-ac3e-205907897db1!
I0812 10:35:14.575033 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0812 10:35:14.576513 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"95072959-4c1e-4162-a85a-662134d8e79c", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0812 10:35:14.575174 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 20605c45-6ccf-4c92-814c-279f32d2ad81 344 0 2024-08-12 10:32:56 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-12 10:32:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-95072959-4c1e-4162-a85a-662134d8e79c &PersistentVolumeClaim{ObjectMeta:{myclaim default 95072959-4c1e-4162-a85a-662134d8e79c 710 0 2024-08-12 10:35:14 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2024-08-12 10:35:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-12 10:35:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0812 10:35:14.579465 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-95072959-4c1e-4162-a85a-662134d8e79c" provisioned
I0812 10:35:14.579789 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0812 10:35:14.580172 1 volume_store.go:212] Trying to save persistentvolume "pvc-95072959-4c1e-4162-a85a-662134d8e79c"
I0812 10:35:14.596213 1 volume_store.go:219] persistentvolume "pvc-95072959-4c1e-4162-a85a-662134d8e79c" saved
I0812 10:35:14.597409 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"95072959-4c1e-4162-a85a-662134d8e79c", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-95072959-4c1e-4162-a85a-662134d8e79c
==> storage-provisioner [610e424a02465c31f26f162626b280b2f83cdea7b5d716d2b57d67f6d793b2b3] <==
I0812 10:34:45.164903 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0812 10:34:45.166580 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-995621 -n functional-995621
helpers_test.go:261: (dbg) Run: kubectl --context functional-995621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-64454c8b5c-kgcdb sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-995621 describe pod busybox-mount mysql-64454c8b5c-kgcdb sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-995621 describe pod busybox-mount mysql-64454c8b5c-kgcdb sp-pod:
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-995621/192.168.39.30
Start Time: Mon, 12 Aug 2024 10:35:11 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.7
IPs:
IP: 10.244.0.7
Containers:
mount-munger:
Container ID: containerd://84636aa00a886e31a481b7d151a5cb81fe30374786f4d00fba7c0fac1dcc62fb
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 12 Aug 2024 10:35:15 +0000
Finished: Mon, 12 Aug 2024 10:35:15 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nxt7p (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-nxt7p:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28s default-scheduler Successfully assigned default/busybox-mount to functional-995621
Normal Pulling 28s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 25s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.389s (3.389s including waiting). Image size: 2395207 bytes.
Normal Created 25s kubelet Created container mount-munger
Normal Started 25s kubelet Started container mount-munger
Name: mysql-64454c8b5c-kgcdb
Namespace: default
Priority: 0
Service Account: default
Node: functional-995621/192.168.39.30
Start Time: Mon, 12 Aug 2024 10:35:24 +0000
Labels: app=mysql
pod-template-hash=64454c8b5c
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-64454c8b5c
Containers:
mysql:
Container ID:
Image: docker.io/mysql:5.7
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 700m
memory: 700Mi
Requests:
cpu: 600m
memory: 512Mi
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8jjdt (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-8jjdt:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15s default-scheduler Successfully assigned default/mysql-64454c8b5c-kgcdb to functional-995621
Normal Pulling 16s kubelet Pulling image "docker.io/mysql:5.7"
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-995621/192.168.39.30
Start Time: Mon, 12 Aug 2024 10:35:31 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brmfm (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-brmfm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/sp-pod to functional-995621
Normal Pulling 9s kubelet Pulling image "docker.io/nginx"
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (31.64s)