=== RUN TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run: kubectl --context functional-406825 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run: kubectl --context functional-406825 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-6jtbn" [0a435882-bbd8-4ef6-afbe-d7398712d43b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-6jtbn" [0a435882-bbd8-4ef6-afbe-d7398712d43b] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005877582s
functional_test.go:1645: (dbg) Run: out/minikube-linux-amd64 -p functional-406825 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.243:31342
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
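The refused fetches above can be reproduced outside the test harness; a rough shell equivalent of the URL fetch the test performs (the retry count and sleep here are illustrative, not the test's actual backoff) is:

    URL=$(out/minikube-linux-amd64 -p functional-406825 service hello-node-connect --url)
    for i in $(seq 1 10); do curl -sf "$URL" && break; sleep 3; done

A healthy NodePort would return the echoserver request dump; here every attempt ends in "connection refused".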
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run: kubectl --context functional-406825 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name: hello-node-connect-57b4589c47-6jtbn
Namespace: default
Priority: 0
Service Account: default
Node: functional-406825/192.168.39.243
Start Time: Wed, 31 Jul 2024 19:38:03 +0000
Labels: app=hello-node-connect
pod-template-hash=57b4589c47
Annotations: <none>
Status: Running
IP: 10.244.0.6
IPs:
IP: 10.244.0.6
Controlled By: ReplicaSet/hello-node-connect-57b4589c47
Containers:
echoserver:
Container ID: containerd://79805f790046fbb6971a630e4b001268ed7ff79519a46fc8e02a4d84a36de0ea
Image: registry.k8s.io/echoserver:1.8
Image ID: registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 31 Jul 2024 19:38:05 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r6qpn (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-r6qpn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27s default-scheduler Successfully assigned default/hello-node-connect-57b4589c47-6jtbn to functional-406825
Normal Pulling 27s kubelet Pulling image "registry.k8s.io/echoserver:1.8"
Normal Pulled 25s kubelet Successfully pulled image "registry.k8s.io/echoserver:1.8" in 143ms (1.513s including waiting). Image size: 46237695 bytes.
Normal Created 25s kubelet Created container echoserver
Normal Started 25s kubelet Started container echoserver
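The pod itself reports Running and Ready, so a quick way to confirm the container answers on 8080 independently of the Service/NodePort path is a port-forward check (a generic debugging step, not part of the test; 18080 is an arbitrary local port):

    kubectl --context functional-406825 port-forward pod/hello-node-connect-57b4589c47-6jtbn 18080:8080 &
    curl -s http://127.0.0.1:18080/    # echoserver should echo the request headers back
    kill %1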
functional_test.go:1604: (dbg) Run: kubectl --context functional-406825 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run: kubectl --context functional-406825 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name: hello-node-connect
Namespace: default
Labels: app=hello-node-connect
Annotations: <none>
Selector: app=hello-node-connect
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.244.63
IPs: 10.101.244.63
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31342/TCP
Endpoints: 10.244.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
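The Service looks consistent: the selector matches the pod and Endpoints lists 10.244.0.6:8080, yet the external fetch of NodePort 31342 was refused, which points at the node-level proxy path rather than the workload. One way to separate the two, assuming curl is available inside the guest VM, is to probe both from the node itself:

    out/minikube-linux-amd64 -p functional-406825 ssh "curl -s http://10.244.0.6:8080/"    # pod directly, bypassing kube-proxy
    out/minikube-linux-amd64 -p functional-406825 ssh "curl -s http://localhost:31342/"    # NodePort, through kube-proxy

If the first succeeds and the second is refused, the NodePort rules were not (yet) programmed on the node.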
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-406825 -n functional-406825
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-406825 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 logs -n 25: (1.714595788s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs:
-- stdout --
==> Audit <==
|----------------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| ssh | functional-406825 ssh sudo cat | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | /etc/ssl/certs/6241492.pem | | | | | |
| ssh | functional-406825 ssh sudo cat | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | /usr/share/ca-certificates/6241492.pem | | | | | |
| image | functional-406825 image load --daemon | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | docker.io/kicbase/echo-server:functional-406825 | | | | | |
| | --alsologtostderr | | | | | |
| ssh | functional-406825 ssh sudo cat | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | /etc/ssl/certs/3ec20f2e.0 | | | | | |
| ssh | functional-406825 ssh sudo cat | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | /etc/test/nested/copy/624149/hosts | | | | | |
| image | functional-406825 image ls | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| image | functional-406825 image load --daemon | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | docker.io/kicbase/echo-server:functional-406825 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-406825 image ls | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| image | functional-406825 image load --daemon | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | docker.io/kicbase/echo-server:functional-406825 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-406825 image ls | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| image | functional-406825 image save docker.io/kicbase/echo-server:functional-406825 | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-406825 image rm | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | docker.io/kicbase/echo-server:functional-406825 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-406825 image ls | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| image | functional-406825 image load | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-406825 image ls | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| image | functional-406825 image save --daemon | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | docker.io/kicbase/echo-server:functional-406825 | | | | | |
| | --alsologtostderr | | | | | |
| update-context | functional-406825 | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-406825 | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-406825 | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-406825 | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | image ls --format short | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-406825 | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | image ls --format yaml | | | | | |
| | --alsologtostderr | | | | | |
| ssh | functional-406825 ssh pgrep | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | |
| | buildkitd | | | | | |
| image | functional-406825 image build -t | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| | localhost/my-image:functional-406825 | | | | | |
| | testdata/build --alsologtostderr | | | | | |
| image | functional-406825 image ls | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
| image | functional-406825 | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | |
| | image ls --format json | | | | | |
| | --alsologtostderr | | | | | |
|----------------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/31 19:38:03
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0731 19:38:03.039711 631651 out.go:291] Setting OutFile to fd 1 ...
I0731 19:38:03.040048 631651 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:03.040060 631651 out.go:304] Setting ErrFile to fd 2...
I0731 19:38:03.040065 631651 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:03.040346 631651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
I0731 19:38:03.040961 631651 out.go:298] Setting JSON to false
I0731 19:38:03.042446 631651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12027,"bootTime":1722442656,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0731 19:38:03.042586 631651 start.go:139] virtualization: kvm guest
I0731 19:38:03.044331 631651 out.go:177] * [functional-406825] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0731 19:38:03.046231 631651 notify.go:220] Checking for updates...
I0731 19:38:03.046277 631651 out.go:177] - MINIKUBE_LOCATION=19355
I0731 19:38:03.047920 631651 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0731 19:38:03.049520 631651 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
I0731 19:38:03.050874 631651 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
I0731 19:38:03.052002 631651 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0731 19:38:03.053222 631651 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0731 19:38:03.055064 631651 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:03.055594 631651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:03.055643 631651 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:03.072556 631651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
I0731 19:38:03.073006 631651 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:03.073572 631651 main.go:141] libmachine: Using API Version 1
I0731 19:38:03.073595 631651 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:03.073936 631651 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:03.074119 631651 main.go:141] libmachine: (functional-406825) Calling .DriverName
I0731 19:38:03.074369 631651 driver.go:392] Setting default libvirt URI to qemu:///system
I0731 19:38:03.074667 631651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:03.074705 631651 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:03.090218 631651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
I0731 19:38:03.090701 631651 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:03.091249 631651 main.go:141] libmachine: Using API Version 1
I0731 19:38:03.091277 631651 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:03.091652 631651 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:03.091850 631651 main.go:141] libmachine: (functional-406825) Calling .DriverName
I0731 19:38:03.129855 631651 out.go:177] * Using the kvm2 driver based on existing profile
I0731 19:38:03.131195 631651 start.go:297] selected driver: kvm2
I0731 19:38:03.131215 631651 start.go:901] validating driver "kvm2" against &{Name:functional-406825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-406825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 19:38:03.131357 631651 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0731 19:38:03.132528 631651 cni.go:84] Creating CNI manager for ""
I0731 19:38:03.132546 631651 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0731 19:38:03.132596 631651 start.go:340] cluster config:
{Name:functional-406825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-406825 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 19:38:03.134658 631651 out.go:177] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
d595e113e36db 5107333e08a87 2 seconds ago Running mysql 0 2a15baf358afa mysql-64454c8b5c-w77r4
b83ebda98a07f 6e38f40d628db 12 seconds ago Running storage-provisioner 4 1c87913d6fc6b storage-provisioner
5a4384c31025e 115053965e86b 16 seconds ago Running dashboard-metrics-scraper 0 ffc1c313f021e dashboard-metrics-scraper-b5fc48f67-c7rw5
47999e35dc249 07655ddf2eebe 18 seconds ago Running kubernetes-dashboard 0 cad990844a166 kubernetes-dashboard-779776cb65-8c8zn
faafbb13ccfd5 56cc512116c8f 24 seconds ago Exited mount-munger 0 2097c338ab734 busybox-mount
79805f790046f 82e4c8a736a4f 26 seconds ago Running echoserver 0 88289ed18ce25 hello-node-connect-57b4589c47-6jtbn
a81b2c1c96a23 82e4c8a736a4f 26 seconds ago Running echoserver 0 35a6b1d071927 hello-node-6d85cfcfd8-f9pcn
7115833608482 6e38f40d628db 40 seconds ago Exited storage-provisioner 3 1c87913d6fc6b storage-provisioner
3b6182c0a1e6f 1f6d574d502f3 59 seconds ago Running kube-apiserver 0 0d7ab1a1b1411 kube-apiserver-functional-406825
055cecda9b96c 3861cfcd7c04c 59 seconds ago Running etcd 2 08f7e7048ec17 etcd-functional-406825
b25f85a57e7f2 3edc18e7b7672 59 seconds ago Running kube-scheduler 2 5db15c27519cd kube-scheduler-functional-406825
87b2c2b17b4d2 76932a3b37d7e 59 seconds ago Running kube-controller-manager 3 880c93bbe909e kube-controller-manager-functional-406825
9467ea7805021 76932a3b37d7e About a minute ago Exited kube-controller-manager 2 880c93bbe909e kube-controller-manager-functional-406825
70cdf402796ff 3edc18e7b7672 About a minute ago Exited kube-scheduler 1 5db15c27519cd kube-scheduler-functional-406825
e5a09d5dee13c 3861cfcd7c04c About a minute ago Exited etcd 1 08f7e7048ec17 etcd-functional-406825
1c2bd64fa098a cbb01a7bd410d About a minute ago Running coredns 1 2e2b1c3ae9f56 coredns-7db6d8ff4d-jktmb
56df5f47a4f36 55bb025d2cfa5 About a minute ago Running kube-proxy 1 884e7a12c3459 kube-proxy-drw49
4734c24ce252f cbb01a7bd410d 2 minutes ago Exited coredns 0 2e2b1c3ae9f56 coredns-7db6d8ff4d-jktmb
a4329523b3534 55bb025d2cfa5 2 minutes ago Exited kube-proxy 0 884e7a12c3459 kube-proxy-drw49
==> containerd <==
Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.205106284Z" level=info msg="CreateContainer within sandbox \"2a15baf358afaaf4193be011748f33458e37517342a0c71f1f653e08c0bd6519\" for &ContainerMetadata{Name:mysql,Attempt:0,} returns container id \"d595e113e36dbd2991505a208ad8004ef3e949325cea67c6cab4568154e60e6c\""
Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.207148611Z" level=info msg="StartContainer for \"d595e113e36dbd2991505a208ad8004ef3e949325cea67c6cab4568154e60e6c\""
Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.224217984Z" level=info msg="shim disconnected" id=77a0p1tpspptkl9wcbj51cpk0 namespace=k8s.io
Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.225976606Z" level=warning msg="cleaning up after shim disconnected" id=77a0p1tpspptkl9wcbj51cpk0 namespace=k8s.io
Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.226052020Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.689656065Z" level=info msg="StartContainer for \"d595e113e36dbd2991505a208ad8004ef3e949325cea67c6cab4568154e60e6c\" returns successfully"
Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.930520114Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-406825\""
Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.938167992Z" level=info msg="ImageCreate event name:\"sha256:ca33cbd93a7d78edf7bbc4ba7f5ceaab13402bd5e08d57b6fd628cf608e9d127\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.938894721Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-406825\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.137585672Z" level=info msg="StopPodSandbox for \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\""
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.191324972Z" level=info msg="TearDown network for sandbox \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\" successfully"
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.191372964Z" level=info msg="StopPodSandbox for \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\" returns successfully"
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.192015606Z" level=info msg="RemovePodSandbox for \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\""
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.192073007Z" level=info msg="Forcibly stopping sandbox \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\""
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.217199987Z" level=info msg="TearDown network for sandbox \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\" successfully"
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.228794961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.228901655Z" level=info msg="RemovePodSandbox \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\" returns successfully"
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229344627Z" level=info msg="StopPodSandbox for \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\""
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229441237Z" level=info msg="TearDown network for sandbox \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\" successfully"
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229465615Z" level=info msg="StopPodSandbox for \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\" returns successfully"
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229783378Z" level=info msg="RemovePodSandbox for \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\""
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229871285Z" level=info msg="Forcibly stopping sandbox \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\""
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229924740Z" level=info msg="TearDown network for sandbox \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\" successfully"
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.237274927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.237467368Z" level=info msg="RemovePodSandbox \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\" returns successfully"
==> coredns [1c2bd64fa098a8776a450dc431d22e2857de84147c6670490bc1dd1b534471c1] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:48062 - 63647 "HINFO IN 3992222033566678196.4594852637004116219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010896725s
==> coredns [4734c24ce252fddbacaa087de98c8b525b4ada0576dce000a59a921b85f327d0] <==
linux/amd64, go1.20.7, ae2bbc2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[1846180525]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:35:42.098) (total time: 30001ms):
Trace[1846180525]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:36:12.099)
Trace[1846180525]: [30.001394109s] [30.001394109s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[2057238692]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:35:42.098) (total time: 30001ms):
Trace[2057238692]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:36:12.099)
Trace[2057238692]: [30.001108762s] [30.001108762s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[291807057]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:35:42.099) (total time: 30001ms):
Trace[291807057]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:36:12.100)
Trace[291807057]: [30.001301648s] [30.001301648s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
[INFO] Reloading complete
[INFO] 127.0.0.1:49643 - 4586 "HINFO IN 7766713651010527523.4206985503821084800. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009773837s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-406825
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-406825
kubernetes.io/os=linux
minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
minikube.k8s.io/name=functional-406825
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_07_31T19_35_27_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 31 Jul 2024 19:35:24 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-406825
AcquireTime: <unset>
RenewTime: Wed, 31 Jul 2024 19:38:26 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 31 Jul 2024 19:37:35 +0000 Wed, 31 Jul 2024 19:35:22 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 31 Jul 2024 19:37:35 +0000 Wed, 31 Jul 2024 19:35:22 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 31 Jul 2024 19:37:35 +0000 Wed, 31 Jul 2024 19:35:22 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 31 Jul 2024 19:37:35 +0000 Wed, 31 Jul 2024 19:35:27 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.243
Hostname: functional-406825
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 03368050aac543f58b19785ef3108713
System UUID: 03368050-aac5-43f5-8b19-785ef3108713
Boot ID: 181c5881-803f-47ee-9c82-783afad1dc27
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.20
Kubelet Version: v1.30.3
Kube-Proxy Version: v1.30.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-6d85cfcfd8-f9pcn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31s
default hello-node-connect-57b4589c47-6jtbn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29s
default mysql-64454c8b5c-w77r4 600m (30%) 700m (35%) 512Mi (13%) 700Mi (18%) 14s
kube-system coredns-7db6d8ff4d-jktmb 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m52s
kube-system etcd-functional-406825 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 3m5s
kube-system kube-apiserver-functional-406825 250m (12%) 0 (0%) 0 (0%) 0 (0%) 56s
kube-system kube-controller-manager-functional-406825 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m5s
kube-system kube-proxy-drw49 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m52s
kube-system kube-scheduler-functional-406825 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m5s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m50s
kubernetes-dashboard dashboard-metrics-scraper-b5fc48f67-c7rw5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27s
kubernetes-dashboard kubernetes-dashboard-779776cb65-8c8zn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (67%) 700m (35%)
memory 682Mi (17%) 870Mi (22%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m50s kube-proxy
Normal Starting 118s kube-proxy
Normal NodeHasSufficientMemory 3m5s kubelet Node functional-406825 status is now: NodeHasSufficientMemory
Normal Starting 3m5s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 3m5s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 3m5s kubelet Node functional-406825 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m5s kubelet Node functional-406825 status is now: NodeHasSufficientPID
Normal NodeReady 3m5s kubelet Node functional-406825 status is now: NodeReady
Normal RegisteredNode 2m52s node-controller Node functional-406825 event: Registered Node functional-406825 in Controller
Normal Starting 108s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 108s (x8 over 108s) kubelet Node functional-406825 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 108s (x8 over 108s) kubelet Node functional-406825 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 108s (x7 over 108s) kubelet Node functional-406825 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 108s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 94s node-controller Node functional-406825 event: Registered Node functional-406825 in Controller
Normal Starting 60s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 60s (x8 over 60s) kubelet Node functional-406825 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 60s (x8 over 60s) kubelet Node functional-406825 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 60s (x7 over 60s) kubelet Node functional-406825 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 60s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 45s node-controller Node functional-406825 event: Registered Node functional-406825 in Controller
==> dmesg <==
[ +0.143415] systemd-fstab-generator[2228]: Ignoring "noauto" option for root device
[ +0.306049] systemd-fstab-generator[2257]: Ignoring "noauto" option for root device
[ +1.733015] systemd-fstab-generator[2409]: Ignoring "noauto" option for root device
[ +0.095931] kauditd_printk_skb: 102 callbacks suppressed
[ +5.693164] kauditd_printk_skb: 18 callbacks suppressed
[ +10.351308] kauditd_printk_skb: 21 callbacks suppressed
[ +1.474531] systemd-fstab-generator[3120]: Ignoring "noauto" option for root device
[ +6.945731] kauditd_printk_skb: 23 callbacks suppressed
[Jul31 19:37] systemd-fstab-generator[3318]: Ignoring "noauto" option for root device
[ +13.125870] systemd-fstab-generator[3618]: Ignoring "noauto" option for root device
[ +0.097770] kauditd_printk_skb: 12 callbacks suppressed
[ +0.063034] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
[ +0.171575] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
[ +0.144135] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
[ +0.313486] systemd-fstab-generator[3685]: Ignoring "noauto" option for root device
[ +1.147859] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
[ +10.888321] kauditd_printk_skb: 125 callbacks suppressed
[ +1.370316] systemd-fstab-generator[4132]: Ignoring "noauto" option for root device
[ +4.286509] kauditd_printk_skb: 41 callbacks suppressed
[ +15.449615] systemd-fstab-generator[4477]: Ignoring "noauto" option for root device
[ +5.955737] kauditd_printk_skb: 20 callbacks suppressed
[Jul31 19:38] kauditd_printk_skb: 31 callbacks suppressed
[ +5.983729] kauditd_printk_skb: 64 callbacks suppressed
[ +5.015631] kauditd_printk_skb: 2 callbacks suppressed
[ +5.825164] kauditd_printk_skb: 13 callbacks suppressed
==> etcd [055cecda9b96c4180b7f1f2927cb9a081af2a662fa7558589d69050ca26936b8] <==
{"level":"info","ts":"2024-07-31T19:37:32.957782Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-07-31T19:37:32.957872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-07-31T19:37:32.95814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 switched to configuration voters=(5579817544954101747)"}
{"level":"info","ts":"2024-07-31T19:37:32.959908Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c7dcc22c4a571085","local-member-id":"4d6f7e7e767b3ff3","added-peer-id":"4d6f7e7e767b3ff3","added-peer-peer-urls":["https://192.168.39.243:2380"]}
{"level":"info","ts":"2024-07-31T19:37:32.960049Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c7dcc22c4a571085","local-member-id":"4d6f7e7e767b3ff3","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-31T19:37:32.960103Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-31T19:37:32.968612Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-07-31T19:37:32.969259Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.243:2380"}
{"level":"info","ts":"2024-07-31T19:37:32.969442Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.243:2380"}
{"level":"info","ts":"2024-07-31T19:37:32.969908Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4d6f7e7e767b3ff3","initial-advertise-peer-urls":["https://192.168.39.243:2380"],"listen-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.243:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-07-31T19:37:32.971653Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-07-31T19:37:34.024576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 3"}
{"level":"info","ts":"2024-07-31T19:37:34.024632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 3"}
{"level":"info","ts":"2024-07-31T19:37:34.024671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 3"}
{"level":"info","ts":"2024-07-31T19:37:34.024682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became candidate at term 4"}
{"level":"info","ts":"2024-07-31T19:37:34.024687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgVoteResp from 4d6f7e7e767b3ff3 at term 4"}
{"level":"info","ts":"2024-07-31T19:37:34.024696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became leader at term 4"}
{"level":"info","ts":"2024-07-31T19:37:34.024711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d6f7e7e767b3ff3 elected leader 4d6f7e7e767b3ff3 at term 4"}
{"level":"info","ts":"2024-07-31T19:37:34.030148Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4d6f7e7e767b3ff3","local-member-attributes":"{Name:functional-406825 ClientURLs:[https://192.168.39.243:2379]}","request-path":"/0/members/4d6f7e7e767b3ff3/attributes","cluster-id":"c7dcc22c4a571085","publish-timeout":"7s"}
{"level":"info","ts":"2024-07-31T19:37:34.030201Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-31T19:37:34.030514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-31T19:37:34.032784Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-07-31T19:37:34.032986Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-07-31T19:37:34.03346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.243:2379"}
{"level":"info","ts":"2024-07-31T19:37:34.035338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
==> etcd [e5a09d5dee13cd4329bb354ac57ebf3d25435aa03f27b2e513b7835c15be9ecf] <==
{"level":"info","ts":"2024-07-31T19:36:32.778743Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.243:2380"}
{"level":"info","ts":"2024-07-31T19:36:33.755879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 2"}
{"level":"info","ts":"2024-07-31T19:36:33.755961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 2"}
{"level":"info","ts":"2024-07-31T19:36:33.756002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 2"}
{"level":"info","ts":"2024-07-31T19:36:33.756151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became candidate at term 3"}
{"level":"info","ts":"2024-07-31T19:36:33.756178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgVoteResp from 4d6f7e7e767b3ff3 at term 3"}
{"level":"info","ts":"2024-07-31T19:36:33.756295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became leader at term 3"}
{"level":"info","ts":"2024-07-31T19:36:33.756319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d6f7e7e767b3ff3 elected leader 4d6f7e7e767b3ff3 at term 3"}
{"level":"info","ts":"2024-07-31T19:36:33.764183Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4d6f7e7e767b3ff3","local-member-attributes":"{Name:functional-406825 ClientURLs:[https://192.168.39.243:2379]}","request-path":"/0/members/4d6f7e7e767b3ff3/attributes","cluster-id":"c7dcc22c4a571085","publish-timeout":"7s"}
{"level":"info","ts":"2024-07-31T19:36:33.764436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-31T19:36:33.765016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-31T19:36:33.767011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-07-31T19:36:33.767043Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-07-31T19:36:33.770692Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-07-31T19:36:33.789032Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.243:2379"}
{"level":"info","ts":"2024-07-31T19:37:30.583778Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-07-31T19:37:30.583956Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-406825","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"]}
{"level":"warn","ts":"2024-07-31T19:37:30.584048Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-07-31T19:37:30.584079Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-07-31T19:37:30.58579Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
{"level":"warn","ts":"2024-07-31T19:37:30.585859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
{"level":"info","ts":"2024-07-31T19:37:30.585902Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4d6f7e7e767b3ff3","current-leader-member-id":"4d6f7e7e767b3ff3"}
{"level":"info","ts":"2024-07-31T19:37:30.589075Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.243:2380"}
{"level":"info","ts":"2024-07-31T19:37:30.589179Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.243:2380"}
{"level":"info","ts":"2024-07-31T19:37:30.589188Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-406825","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"]}
==> kernel <==
19:38:32 up 3 min, 0 users, load average: 1.69, 0.65, 0.25
Linux functional-406825 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [3b6182c0a1e6f18fd53ab4dbf00d5335cd376fab510c2dbf2cd0300582f35c73] <==
I0731 19:37:35.319423 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0731 19:37:35.319609 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0731 19:37:35.326320 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0731 19:37:35.326543 1 aggregator.go:165] initial CRD sync complete...
I0731 19:37:35.326702 1 autoregister_controller.go:141] Starting autoregister controller
I0731 19:37:35.326749 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0731 19:37:35.326880 1 cache.go:39] Caches are synced for autoregister controller
I0731 19:37:35.351987 1 shared_informer.go:320] Caches are synced for node_authorizer
I0731 19:37:36.202316 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0731 19:37:36.536083 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243]
I0731 19:37:36.538680 1 controller.go:615] quota admission added evaluator for: endpoints
I0731 19:37:36.546303 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0731 19:37:36.807457 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0731 19:37:36.821430 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0731 19:37:36.869479 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0731 19:37:36.894048 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0731 19:37:36.902295 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0731 19:37:57.533065 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.123.44"}
I0731 19:38:01.387630 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0731 19:38:01.498554 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.44.172"}
I0731 19:38:03.335777 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.244.63"}
I0731 19:38:04.725156 1 controller.go:615] quota admission added evaluator for: namespaces
I0731 19:38:05.098790 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.199.87"}
I0731 19:38:05.157054 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.144.204"}
I0731 19:38:17.988325 1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.172.2"}
==> kube-controller-manager [87b2c2b17b4d24855f522d4e55dae3fed9e8133c1e5abe3a3d7c261cc642c399] <==
I0731 19:38:04.933190 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.479608ms"
E0731 19:38:04.933241 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0731 19:38:05.006545 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="32.761528ms"
I0731 19:38:05.044518 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="37.912257ms"
I0731 19:38:05.044617 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="33.204µs"
I0731 19:38:05.045314 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="75.08µs"
I0731 19:38:05.053135 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="47.519565ms"
I0731 19:38:05.145524 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="92.214742ms"
I0731 19:38:05.145873 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="316.928µs"
I0731 19:38:05.146272 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="180.512µs"
I0731 19:38:05.151519 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="75.917µs"
I0731 19:38:06.317927 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="14.30351ms"
I0731 19:38:06.318057 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="66.692µs"
I0731 19:38:06.333111 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="11.421925ms"
I0731 19:38:06.333184 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="32.869µs"
I0731 19:38:14.351058 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="10.05001ms"
I0731 19:38:14.351606 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="44.982µs"
I0731 19:38:16.354176 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.557563ms"
I0731 19:38:16.354939 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="52.834µs"
I0731 19:38:18.076537 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="33.02873ms"
I0731 19:38:18.105325 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="28.748776ms"
I0731 19:38:18.105388 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="37.105µs"
I0731 19:38:18.117247 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="34.335µs"
I0731 19:38:31.793165 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="61.910259ms"
I0731 19:38:31.793928 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="115.383µs"
==> kube-controller-manager [9467ea7805021bd3313c4a56b8d5ebf71f859a118f403985642acea447540c90] <==
I0731 19:36:58.809158 1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
I0731 19:36:58.809165 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0731 19:36:58.809511 1 shared_informer.go:320] Caches are synced for cidrallocator
I0731 19:36:58.811992 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0731 19:36:58.813465 1 shared_informer.go:320] Caches are synced for ReplicationController
I0731 19:36:58.819637 1 shared_informer.go:320] Caches are synced for taint-eviction-controller
I0731 19:36:58.824664 1 shared_informer.go:320] Caches are synced for namespace
I0731 19:36:58.830627 1 shared_informer.go:320] Caches are synced for persistent volume
I0731 19:36:58.831885 1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
I0731 19:36:58.836246 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
I0731 19:36:58.839763 1 shared_informer.go:320] Caches are synced for TTL
I0731 19:36:58.840985 1 shared_informer.go:320] Caches are synced for PVC protection
I0731 19:36:58.848242 1 shared_informer.go:320] Caches are synced for ReplicaSet
I0731 19:36:58.848489 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="188.184µs"
I0731 19:36:58.865778 1 shared_informer.go:320] Caches are synced for daemon sets
I0731 19:36:58.883192 1 shared_informer.go:320] Caches are synced for disruption
I0731 19:36:58.890691 1 shared_informer.go:320] Caches are synced for stateful set
I0731 19:36:58.933146 1 shared_informer.go:320] Caches are synced for resource quota
I0731 19:36:58.966281 1 shared_informer.go:320] Caches are synced for resource quota
I0731 19:36:58.970653 1 shared_informer.go:320] Caches are synced for deployment
I0731 19:36:58.984922 1 shared_informer.go:320] Caches are synced for attach detach
I0731 19:36:59.017962 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0731 19:36:59.450979 1 shared_informer.go:320] Caches are synced for garbage collector
I0731 19:36:59.464326 1 shared_informer.go:320] Caches are synced for garbage collector
I0731 19:36:59.464361 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
==> kube-proxy [56df5f47a4f36d8cda6aaecfceaa9d39680ceca6ad8a5ae362a55e7382716bb7] <==
E0731 19:36:34.230114 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:34.230178 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:34.230220 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:34.230311 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:35.105776 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:35.105962 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:35.474691 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:35.474771 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:35.679566 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:35.679639 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:37.232167 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:37.232235 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:37.801891 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:37.802005 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:38.382467 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:38.382521 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:41.924123 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:41.924204 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:42.151317 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:42.151430 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
W0731 19:36:42.691662 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
E0731 19:36:42.691703 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
I0731 19:36:49.328798 1 shared_informer.go:320] Caches are synced for service config
I0731 19:36:50.728678 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0731 19:36:53.829723 1 shared_informer.go:320] Caches are synced for node config
==> kube-proxy [a4329523b353453860a827922993f8e3da55a645e509f571832c764f2383e96e] <==
I0731 19:35:41.559443 1 server_linux.go:69] "Using iptables proxy"
I0731 19:35:41.572403 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.243"]
I0731 19:35:41.691837 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0731 19:35:41.691907 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0731 19:35:41.691926 1 server_linux.go:165] "Using iptables Proxier"
I0731 19:35:41.717252 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0731 19:35:41.717412 1 server.go:872] "Version info" version="v1.30.3"
I0731 19:35:41.717421 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0731 19:35:41.753163 1 config.go:192] "Starting service config controller"
I0731 19:35:41.753190 1 shared_informer.go:313] Waiting for caches to sync for service config
I0731 19:35:41.753218 1 config.go:101] "Starting endpoint slice config controller"
I0731 19:35:41.753222 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0731 19:35:41.753927 1 config.go:319] "Starting node config controller"
I0731 19:35:41.753949 1 shared_informer.go:313] Waiting for caches to sync for node config
I0731 19:35:41.853799 1 shared_informer.go:320] Caches are synced for service config
I0731 19:35:41.853986 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0731 19:35:41.854404 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [70cdf402796ffad8d7224e1865f68eb70fe3b2bc991e265c287d04634256b221] <==
I0731 19:36:34.135999 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0731 19:36:34.144007 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0731 19:36:34.236987 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0731 19:36:34.244460 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0731 19:36:34.244519 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0731 19:36:46.526165 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
E0731 19:36:46.527927 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
E0731 19:36:46.528031 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
E0731 19:36:46.528096 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)
E0731 19:36:46.528146 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)
E0731 19:36:46.528179 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
E0731 19:36:46.528234 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
E0731 19:36:46.528293 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
E0731 19:36:46.528338 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)
E0731 19:36:46.528394 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
E0731 19:36:46.528452 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0731 19:36:46.528575 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)
E0731 19:36:46.528650 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
E0731 19:36:46.531049 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
E0731 19:36:46.535066 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
E0731 19:36:46.535216 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
E0731 19:36:46.535326 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
I0731 19:37:30.527968 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I0731 19:37:30.528050 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0731 19:37:30.528174 1 run.go:74] "command failed" err="finished without leader elect"
==> kube-scheduler [b25f85a57e7f235a82c44cc6d4957430cbf9b17d56b28cca1a4d1a359828c205] <==
I0731 19:37:33.566550 1 serving.go:380] Generated self-signed cert in-memory
W0731 19:37:35.243307 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0731 19:37:35.243352 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0731 19:37:35.243362 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0731 19:37:35.243369 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0731 19:37:35.288342 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
I0731 19:37:35.288375 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0731 19:37:35.292976 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0731 19:37:35.293011 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0731 19:37:35.296275 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0731 19:37:35.296341 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0731 19:37:35.393249 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jul 31 19:38:05 functional-406825 kubelet[4139]: I0731 19:38:05.115555 4139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbqmf\" (UniqueName: \"kubernetes.io/projected/c4b6b90a-911b-40bb-82ce-5bd8a541e541-kube-api-access-lbqmf\") pod \"dashboard-metrics-scraper-b5fc48f67-c7rw5\" (UID: \"c4b6b90a-911b-40bb-82ce-5bd8a541e541\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-c7rw5"
Jul 31 19:38:06 functional-406825 kubelet[4139]: I0731 19:38:06.319510 4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-57b4589c47-6jtbn" podStartSLOduration=1.805543084 podStartE2EDuration="3.319493251s" podCreationTimestamp="2024-07-31 19:38:03 +0000 UTC" firstStartedPulling="2024-07-31 19:38:03.890509195 +0000 UTC m=+31.933084810" lastFinishedPulling="2024-07-31 19:38:05.404459362 +0000 UTC m=+33.447034977" observedRunningTime="2024-07-31 19:38:06.302588697 +0000 UTC m=+34.345164332" watchObservedRunningTime="2024-07-31 19:38:06.319493251 +0000 UTC m=+34.362068922"
Jul 31 19:38:08 functional-406825 kubelet[4139]: I0731 19:38:08.101708 4139 scope.go:117] "RemoveContainer" containerID="71158336084828f2bdf770c1218462303e6331f6a4cead6203ab3da979315261"
Jul 31 19:38:08 functional-406825 kubelet[4139]: E0731 19:38:08.101951 4139 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(973a1136-6225-4187-9281-07f81c5f86bc)\"" pod="kube-system/storage-provisioner" podUID="973a1136-6225-4187-9281-07f81c5f86bc"
Jul 31 19:38:08 functional-406825 kubelet[4139]: I0731 19:38:08.117402 4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-6d85cfcfd8-f9pcn" podStartSLOduration=3.835114344 podStartE2EDuration="7.117385798s" podCreationTimestamp="2024-07-31 19:38:01 +0000 UTC" firstStartedPulling="2024-07-31 19:38:01.978227926 +0000 UTC m=+30.020803555" lastFinishedPulling="2024-07-31 19:38:05.260499385 +0000 UTC m=+33.303075009" observedRunningTime="2024-07-31 19:38:06.32055812 +0000 UTC m=+34.363133747" watchObservedRunningTime="2024-07-31 19:38:08.117385798 +0000 UTC m=+36.159961433"
Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.549163 4139 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2tpv\" (UniqueName: \"kubernetes.io/projected/b155e1cc-6736-497c-8687-7094e35b8f3c-kube-api-access-w2tpv\") pod \"b155e1cc-6736-497c-8687-7094e35b8f3c\" (UID: \"b155e1cc-6736-497c-8687-7094e35b8f3c\") "
Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.549229 4139 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b155e1cc-6736-497c-8687-7094e35b8f3c-test-volume\") pod \"b155e1cc-6736-497c-8687-7094e35b8f3c\" (UID: \"b155e1cc-6736-497c-8687-7094e35b8f3c\") "
Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.549332 4139 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b155e1cc-6736-497c-8687-7094e35b8f3c-test-volume" (OuterVolumeSpecName: "test-volume") pod "b155e1cc-6736-497c-8687-7094e35b8f3c" (UID: "b155e1cc-6736-497c-8687-7094e35b8f3c"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.556793 4139 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b155e1cc-6736-497c-8687-7094e35b8f3c-kube-api-access-w2tpv" (OuterVolumeSpecName: "kube-api-access-w2tpv") pod "b155e1cc-6736-497c-8687-7094e35b8f3c" (UID: "b155e1cc-6736-497c-8687-7094e35b8f3c"). InnerVolumeSpecName "kube-api-access-w2tpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.654794 4139 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w2tpv\" (UniqueName: \"kubernetes.io/projected/b155e1cc-6736-497c-8687-7094e35b8f3c-kube-api-access-w2tpv\") on node \"functional-406825\" DevicePath \"\""
Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.654876 4139 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b155e1cc-6736-497c-8687-7094e35b8f3c-test-volume\") on node \"functional-406825\" DevicePath \"\""
Jul 31 19:38:10 functional-406825 kubelet[4139]: I0731 19:38:10.313656 4139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2097c338ab734b7ecd72810e162a308801482b8a706dd1826b4c8ba1ac3705f2"
Jul 31 19:38:16 functional-406825 kubelet[4139]: I0731 19:38:16.343457 4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-8c8zn" podStartSLOduration=4.620877292 podStartE2EDuration="12.34344185s" podCreationTimestamp="2024-07-31 19:38:04 +0000 UTC" firstStartedPulling="2024-07-31 19:38:05.686735725 +0000 UTC m=+33.729311350" lastFinishedPulling="2024-07-31 19:38:13.409300292 +0000 UTC m=+41.451875908" observedRunningTime="2024-07-31 19:38:14.338580877 +0000 UTC m=+42.381156512" watchObservedRunningTime="2024-07-31 19:38:16.34344185 +0000 UTC m=+44.386017482"
Jul 31 19:38:18 functional-406825 kubelet[4139]: I0731 19:38:18.076493 4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-c7rw5" podStartSLOduration=2.847365579 podStartE2EDuration="13.076464909s" podCreationTimestamp="2024-07-31 19:38:05 +0000 UTC" firstStartedPulling="2024-07-31 19:38:05.72202145 +0000 UTC m=+33.764597065" lastFinishedPulling="2024-07-31 19:38:15.951120778 +0000 UTC m=+43.993696395" observedRunningTime="2024-07-31 19:38:16.346052879 +0000 UTC m=+44.388628514" watchObservedRunningTime="2024-07-31 19:38:18.076464909 +0000 UTC m=+46.119040541"
Jul 31 19:38:18 functional-406825 kubelet[4139]: I0731 19:38:18.076730 4139 topology_manager.go:215] "Topology Admit Handler" podUID="33fc257c-36ce-4d7d-a555-802a3b48cba3" podNamespace="default" podName="mysql-64454c8b5c-w77r4"
Jul 31 19:38:18 functional-406825 kubelet[4139]: E0731 19:38:18.076857 4139 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b155e1cc-6736-497c-8687-7094e35b8f3c" containerName="mount-munger"
Jul 31 19:38:18 functional-406825 kubelet[4139]: I0731 19:38:18.076893 4139 memory_manager.go:354] "RemoveStaleState removing state" podUID="b155e1cc-6736-497c-8687-7094e35b8f3c" containerName="mount-munger"
Jul 31 19:38:18 functional-406825 kubelet[4139]: I0731 19:38:18.215396 4139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrn2\" (UniqueName: \"kubernetes.io/projected/33fc257c-36ce-4d7d-a555-802a3b48cba3-kube-api-access-ckrn2\") pod \"mysql-64454c8b5c-w77r4\" (UID: \"33fc257c-36ce-4d7d-a555-802a3b48cba3\") " pod="default/mysql-64454c8b5c-w77r4"
Jul 31 19:38:20 functional-406825 kubelet[4139]: I0731 19:38:20.101484 4139 scope.go:117] "RemoveContainer" containerID="71158336084828f2bdf770c1218462303e6331f6a4cead6203ab3da979315261"
Jul 31 19:38:31 functional-406825 kubelet[4139]: I0731 19:38:31.722436 4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/mysql-64454c8b5c-w77r4" podStartSLOduration=2.5177842569999997 podStartE2EDuration="13.72242039s" podCreationTimestamp="2024-07-31 19:38:18 +0000 UTC" firstStartedPulling="2024-07-31 19:38:18.893102629 +0000 UTC m=+46.935678257" lastFinishedPulling="2024-07-31 19:38:30.097738774 +0000 UTC m=+58.140314390" observedRunningTime="2024-07-31 19:38:31.714548437 +0000 UTC m=+59.757124073" watchObservedRunningTime="2024-07-31 19:38:31.72242039 +0000 UTC m=+59.764996025"
Jul 31 19:38:32 functional-406825 kubelet[4139]: E0731 19:38:32.158101 4139 iptables.go:577] "Could not set up iptables canary" err=<
Jul 31 19:38:32 functional-406825 kubelet[4139]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jul 31 19:38:32 functional-406825 kubelet[4139]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jul 31 19:38:32 functional-406825 kubelet[4139]: Perhaps ip6tables or your kernel needs to be upgraded.
Jul 31 19:38:32 functional-406825 kubelet[4139]: > table="nat" chain="KUBE-KUBELET-CANARY"
==> kubernetes-dashboard [47999e35dc24915a773e1185fd92cc8588af0b1b0505a9091c99c7bd86e7c4e3] <==
2024/07/31 19:38:13 Starting overwatch
2024/07/31 19:38:13 Using namespace: kubernetes-dashboard
2024/07/31 19:38:13 Using in-cluster config to connect to apiserver
2024/07/31 19:38:13 Using secret token for csrf signing
2024/07/31 19:38:13 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/07/31 19:38:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/07/31 19:38:13 Successful initial request to the apiserver, version: v1.30.3
2024/07/31 19:38:13 Generating JWE encryption key
2024/07/31 19:38:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/07/31 19:38:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/07/31 19:38:14 Initializing JWE encryption key from synchronized object
2024/07/31 19:38:14 Creating in-cluster Sidecar client
2024/07/31 19:38:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/31 19:38:14 Serving insecurely on HTTP port: 9090
==> storage-provisioner [71158336084828f2bdf770c1218462303e6331f6a4cead6203ab3da979315261] <==
I0731 19:37:52.260263 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0731 19:37:52.262748 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
==> storage-provisioner [b83ebda98a07f0ea988f240d06c292fd6fe8800582e555fd805c17b20b74e7a8] <==
I0731 19:38:20.214911 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0731 19:38:20.224630 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0731 19:38:20.225690 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-406825 -n functional-406825
helpers_test.go:261: (dbg) Run: kubectl --context functional-406825 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-406825 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-406825 describe pod busybox-mount:
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-406825/192.168.39.243
Start Time: Wed, 31 Jul 2024 19:38:03 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.7
IPs:
IP: 10.244.0.7
Containers:
mount-munger:
Container ID: containerd://faafbb13ccfd582f2820e418ac00dc70d545b21f4283fa15e4c7eac0608f4656
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 31 Jul 2024 19:38:07 +0000
Finished: Wed, 31 Jul 2024 19:38:07 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w2tpv (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-w2tpv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30s default-scheduler Successfully assigned default/busybox-mount to functional-406825
Normal Pulling 29s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 26s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.182s (3.351s including waiting). Image size: 2395207 bytes.
Normal Created 26s kubelet Created container mount-munger
Normal Started 26s kubelet Started container mount-munger
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (30.39s)