=== RUN TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run: kubectl --context functional-366561 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run: kubectl --context functional-366561 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-4gbv4" [5a5ace5c-ef6a-46be-b5e8-9e5e77bd917a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-4gbv4" [5a5ace5c-ef6a-46be-b5e8-9e5e77bd917a] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006650235s
functional_test.go:1645: (dbg) Run: out/minikube-linux-amd64 -p functional-366561 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.13:31904
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
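
The seven connection-refused errors above are the test's retry loop giving up on the NodePort URL it got from "minikube service ... --url". A minimal sketch of an equivalent fetch-with-retry, a hypothetical stand-in and not the actual functional_test.go helper:

// Hypothetical stand-in for the URL check around functional_test.go:1657/1677:
// poll the service URL, retrying on error, and return the body on success.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func fetchWithRetry(url string, attempts int, delay time.Duration) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err != nil {
			lastErr = fmt.Errorf("error fetching %s: %w", url, err)
			time.Sleep(delay)
			continue
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		return body, err
	}
	return nil, fmt.Errorf("failed to fetch %s: %w", url, lastErr)
}

func main() {
	// URL as reported by "minikube service hello-node-connect --url" above.
	body, err := fetchWithRetry("http://192.168.39.13:31904", 7, 5*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("fetched %d bytes\n", len(body))
}
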
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run: kubectl --context functional-366561 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name: hello-node-connect-57b4589c47-4gbv4
Namespace: default
Priority: 0
Service Account: default
Node: functional-366561/192.168.39.13
Start Time: Wed, 17 Apr 2024 18:04:31 +0000
Labels: app=hello-node-connect
pod-template-hash=57b4589c47
Annotations: <none>
Status: Running
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Controlled By: ReplicaSet/hello-node-connect-57b4589c47
Containers:
echoserver:
Container ID: containerd://7da89b68f9a4f24d12e722e10201aa252f0291f25ddbcba78a9478cb89374d26
Image: registry.k8s.io/echoserver:1.8
Image ID: registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 17 Apr 2024 18:04:34 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wj92 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-8wj92:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30s default-scheduler Successfully assigned default/hello-node-connect-57b4589c47-4gbv4 to functional-366561
Normal Pulling 30s kubelet Pulling image "registry.k8s.io/echoserver:1.8"
Normal Pulled 27s kubelet Successfully pulled image "registry.k8s.io/echoserver:1.8" in 2.939s (2.939s including waiting). Image size: 46237695 bytes.
Normal Created 27s kubelet Created container echoserver
Normal Started 27s kubelet Started container echoserver
functional_test.go:1604: (dbg) Run: kubectl --context functional-366561 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run: kubectl --context functional-366561 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name: hello-node-connect
Namespace: default
Labels: app=hello-node-connect
Annotations: <none>
Selector: app=hello-node-connect
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.164.202
IPs: 10.100.164.202
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31904/TCP
Endpoints: 10.244.0.5:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
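
The service's endpoint list is populated (10.244.0.5:8080) and the pod is Running, yet the node IP refuses connections on NodePort 31904, which points at the node-port path (kube-proxy/iptables on the node) rather than at the pod itself. A small diagnostic sketch, purely illustrative and not part of the test, that dials both addresses to separate the two cases; the pod-IP probe would have to run from the node or from inside the cluster network:

// Illustrative probe: check whether the NodePort on the node IP and the pod
// endpoint accept TCP connections. Addresses are taken from the describe
// output above; the pod IP is only reachable from the node/cluster network.
package main

import (
	"fmt"
	"net"
	"time"
)

func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Printf("%s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: accepting connections\n", addr)
}

func main() {
	probe("192.168.39.13:31904") // node IP + NodePort (the path the test exercises)
	probe("10.244.0.5:8080")     // endpoint behind the service (pod IP + target port)
}
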
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-366561 -n functional-366561
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-366561 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 logs -n 25: (2.063662913s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------|-------------------|---------|----------------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------|-------------------|---------|----------------|---------------------|---------------------|
| cp | functional-366561 cp | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | functional-366561:/home/docker/cp-test.txt | | | | | |
| | /tmp/TestFunctionalparallelCpCmd1307465259/001/cp-test.txt | | | | | |
| ssh | functional-366561 ssh -n | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | functional-366561 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | functional-366561 cp | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | testdata/cp-test.txt | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| ssh | functional-366561 ssh -n | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | functional-366561 sudo cat | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| addons | functional-366561 addons list | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| addons | functional-366561 addons list | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | -o json | | | | | |
| service | functional-366561 service | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | hello-node-connect --url | | | | | |
| service | functional-366561 service list | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| service | functional-366561 service list | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | -o json | | | | | |
| service | functional-366561 service | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | --namespace=default --https | | | | | |
| | --url hello-node | | | | | |
| service | functional-366561 | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | service hello-node --url | | | | | |
| | --format={{.IP}} | | | | | |
| service | functional-366561 service | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | hello-node --url | | | | | |
| mount | -p functional-366561 | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | |
| | /tmp/TestFunctionalparallelMountCmdany-port2068520610/001:/mount-9p | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-366561 ssh findmnt | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | |
| | -T /mount-9p | grep 9p | | | | | |
| ssh | functional-366561 ssh findmnt | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | -T /mount-9p | grep 9p | | | | | |
| ssh | functional-366561 ssh -- ls | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | -la /mount-9p | | | | | |
| ssh | functional-366561 ssh cat | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| | /mount-9p/test-1713377095812280804 | | | | | |
| license | | minikube | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
| start | -p functional-366561 | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p functional-366561 | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC | |
| | --dry-run --alsologtostderr | | | | | |
| | -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | functional-366561 ssh sudo | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC | |
| | systemctl is-active docker | | | | | |
| ssh | functional-366561 ssh sudo | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC | |
| | systemctl is-active crio | | | | | |
| start | -p functional-366561 | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | functional-366561 ssh stat | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC | 17 Apr 24 18:05 UTC |
| | /mount-9p/created-by-test | | | | | |
| ssh | functional-366561 ssh stat | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC | |
| | /mount-9p/created-by-pod | | | | | |
|---------|---------------------------------------------------------------------|-------------------|---------|----------------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/04/17 18:05:00
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.22.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0417 18:05:00.689214 89148 out.go:291] Setting OutFile to fd 1 ...
I0417 18:05:00.689343 89148 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:00.689354 89148 out.go:304] Setting ErrFile to fd 2...
I0417 18:05:00.689360 89148 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:00.689665 89148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
I0417 18:05:00.690202 89148 out.go:298] Setting JSON to false
I0417 18:05:00.691132 89148 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6451,"bootTime":1713370650,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0417 18:05:00.691200 89148 start.go:139] virtualization: kvm guest
I0417 18:05:00.693382 89148 out.go:177] * [functional-366561] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
I0417 18:05:00.694880 89148 out.go:177] - MINIKUBE_LOCATION=18665
I0417 18:05:00.694907 89148 notify.go:220] Checking for updates...
I0417 18:05:00.696326 89148 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0417 18:05:00.697790 89148 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
I0417 18:05:00.699251 89148 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
I0417 18:05:00.700506 89148 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0417 18:05:00.701793 89148 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0417 18:05:00.703419 89148 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:00.703871 89148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:00.703930 89148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:00.718769 89148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
I0417 18:05:00.719242 89148 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:00.719762 89148 main.go:141] libmachine: Using API Version 1
I0417 18:05:00.719785 89148 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:00.720162 89148 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:00.720372 89148 main.go:141] libmachine: (functional-366561) Calling .DriverName
I0417 18:05:00.720634 89148 driver.go:392] Setting default libvirt URI to qemu:///system
I0417 18:05:00.721029 89148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:00.721072 89148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:00.735828 89148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
I0417 18:05:00.736205 89148 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:00.736707 89148 main.go:141] libmachine: Using API Version 1
I0417 18:05:00.736726 89148 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:00.737042 89148 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:00.737233 89148 main.go:141] libmachine: (functional-366561) Calling .DriverName
I0417 18:05:00.768327 89148 out.go:177] * Using the kvm2 driver based on existing profile
I0417 18:05:00.769638 89148 start.go:297] selected driver: kvm2
I0417 18:05:00.769653 89148 start.go:901] validating driver "kvm2" against &{Name:functional-366561 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0-rc.2 ClusterName:functional-366561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0417 18:05:00.769802 89148 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0417 18:05:00.772031 89148 out.go:177]
W0417 18:05:00.773262 89148 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
I0417 18:05:00.774528 89148 out.go:177]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
cf80d93b99aa7 c613f16b66424 2 seconds ago Running myfrontend 0 d6c2da8642e5e sp-pod
f6a079fd72a2a 56cc512116c8f 3 seconds ago Exited mount-munger 0 f3e7fdb0dbcb6 busybox-mount
c12debe9bcc1c 82e4c8a736a4f 17 seconds ago Running echoserver 0 e89e47745f23f hello-node-6d85cfcfd8-cwvgp
a399c3912b4f8 5107333e08a87 17 seconds ago Running mysql 0 5f6602d579413 mysql-64454c8b5c-jfzlw
7da89b68f9a4f 82e4c8a736a4f 27 seconds ago Running echoserver 0 c45df9fa26793 hello-node-connect-57b4589c47-4gbv4
27bc1ca9864b2 6e38f40d628db 43 seconds ago Running storage-provisioner 3 c6d64a6988634 storage-provisioner
ea3a7a71a7dc1 6e38f40d628db 59 seconds ago Exited storage-provisioner 2 c6d64a6988634 storage-provisioner
6859c5e8477da 65a750108e0b6 About a minute ago Running kube-apiserver 0 d0ddb1fb67c87 kube-apiserver-functional-366561
1bcd935868e0d ae2ef7918948c About a minute ago Running kube-controller-manager 3 1b96e43363af4 kube-controller-manager-functional-366561
256991d3743fa 461015b94df4b About a minute ago Running kube-scheduler 2 ec7b0ae5de905 kube-scheduler-functional-366561
aa47091bafb13 ae2ef7918948c About a minute ago Exited kube-controller-manager 2 1b96e43363af4 kube-controller-manager-functional-366561
0e641817cefff 461015b94df4b About a minute ago Exited kube-scheduler 1 ec7b0ae5de905 kube-scheduler-functional-366561
8c58f866c71d9 cbb01a7bd410d About a minute ago Running coredns 1 f683f86cf1596 coredns-7db6d8ff4d-g75wp
d237d1a584d9c 35c7fe5cdbee5 About a minute ago Running kube-proxy 1 c82db30a1c5cf kube-proxy-m26jt
10806836be08d 3861cfcd7c04c About a minute ago Running etcd 1 e428dc86d96ad etcd-functional-366561
41492958d0bb2 cbb01a7bd410d 2 minutes ago Running coredns 1 806d43c9858ed coredns-7db6d8ff4d-vzg4j
cdd0c69e76004 cbb01a7bd410d 2 minutes ago Exited coredns 0 806d43c9858ed coredns-7db6d8ff4d-vzg4j
b48d805ab66a5 cbb01a7bd410d 2 minutes ago Exited coredns 0 f683f86cf1596 coredns-7db6d8ff4d-g75wp
52a94f7581927 35c7fe5cdbee5 2 minutes ago Exited kube-proxy 0 c82db30a1c5cf kube-proxy-m26jt
478ddb0e91382 3861cfcd7c04c 2 minutes ago Exited etcd 0 e428dc86d96ad etcd-functional-366561
==> containerd <==
Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.291374904Z" level=info msg="PullImage \"docker.io/nginx:latest\""
Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.293498095Z" level=info msg="CreateContainer within sandbox \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\" for container &ContainerMetadata{Name:mount-munger,Attempt:0,}"
Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.297572019Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.334044750Z" level=info msg="CreateContainer within sandbox \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\" for &ContainerMetadata{Name:mount-munger,Attempt:0,} returns container id \"f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42\""
Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.335904721Z" level=info msg="StartContainer for \"f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42\""
Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.401329303Z" level=info msg="StartContainer for \"f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42\" returns successfully"
Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.461696089Z" level=info msg="shim disconnected" id=f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42 namespace=k8s.io
Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.462066511Z" level=warning msg="cleaning up after shim disconnected" id=f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42 namespace=k8s.io
Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.462218589Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.263631778Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.291422499Z" level=info msg="ImageUpdate event name:\"docker.io/library/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.293449623Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=5407"
Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.319754385Z" level=info msg="Pulled image \"docker.io/nginx:latest\" with image id \"sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b\", repo tag \"docker.io/library/nginx:latest\", repo digest \"docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1\", size \"70542235\" in 1.027813098s"
Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.319987267Z" level=info msg="PullImage \"docker.io/nginx:latest\" returns image reference \"sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b\""
Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.323015874Z" level=info msg="CreateContainer within sandbox \"d6c2da8642e5e2b1ef173cb1371b53e777d1afabe34130a5fa36a0bb76cff97b\" for container &ContainerMetadata{Name:myfrontend,Attempt:0,}"
Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.351371680Z" level=info msg="CreateContainer within sandbox \"d6c2da8642e5e2b1ef173cb1371b53e777d1afabe34130a5fa36a0bb76cff97b\" for &ContainerMetadata{Name:myfrontend,Attempt:0,} returns container id \"cf80d93b99aa7de33d81db882857f594cc3355657250d49a1534b623e2a74cc1\""
Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.353430544Z" level=info msg="StartContainer for \"cf80d93b99aa7de33d81db882857f594cc3355657250d49a1534b623e2a74cc1\""
Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.439815819Z" level=info msg="StartContainer for \"cf80d93b99aa7de33d81db882857f594cc3355657250d49a1534b623e2a74cc1\" returns successfully"
Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.104985839Z" level=info msg="StopPodSandbox for \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\""
Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.105168651Z" level=info msg="Container to stop \"f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.173825088Z" level=info msg="shim disconnected" id=f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27 namespace=k8s.io
Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.173974730Z" level=warning msg="cleaning up after shim disconnected" id=f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27 namespace=k8s.io
Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.173987480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.273465758Z" level=info msg="TearDown network for sandbox \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\" successfully"
Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.273729903Z" level=info msg="StopPodSandbox for \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\" returns successfully"
==> coredns [41492958d0bb25af9d3f69965a5868f447c8f1659ccc50f502acb303a3531d0f] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:54102 - 54224 "HINFO IN 4142035494611935772.3629735222420940855. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019738241s
==> coredns [8c58f866c71d9c979b393430e87bccb0a4ecc1a2bea03541387ed7f87b3fe6ba] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:38187 - 24528 "HINFO IN 3452692630638164530.2872914596714558966. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020678086s
==> coredns [b48d805ab66a54a98b61e4ba2e72441964ca0d6271db325d141e908a5124462c] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:35119 - 59750 "HINFO IN 1414768696273920048.3614774816264106578. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020575146s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/kubernetes: Trace[1950557791]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 18:02:45.810) (total time: 11326ms):
Trace[1950557791]: [11.326277794s] [11.326277794s] END
[INFO] plugin/kubernetes: Trace[1143333661]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 18:02:45.810) (total time: 11326ms):
Trace[1143333661]: [11.326569777s] [11.326569777s] END
[INFO] plugin/kubernetes: Trace[1964501983]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 18:02:45.807) (total time: 11328ms):
Trace[1964501983]: [11.328788041s] [11.328788041s] END
==> coredns [cdd0c69e760048777c6d66a275566376e7f9d5a41c70de7572976f351a2b409d] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:51527 - 65297 "HINFO IN 4082310492503323168.6429197130025581421. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019449222s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-366561
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-366561
kubernetes.io/os=linux
minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
minikube.k8s.io/name=functional-366561
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_04_17T18_02_31_0700
minikube.k8s.io/version=v1.33.0-beta.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 17 Apr 2024 18:02:28 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-366561
AcquireTime: <unset>
RenewTime: Wed, 17 Apr 2024 18:04:53 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 17 Apr 2024 18:04:02 +0000 Wed, 17 Apr 2024 18:02:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 17 Apr 2024 18:04:02 +0000 Wed, 17 Apr 2024 18:02:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 17 Apr 2024 18:04:02 +0000 Wed, 17 Apr 2024 18:02:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 17 Apr 2024 18:04:02 +0000 Wed, 17 Apr 2024 18:02:31 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.13
Hostname: functional-366561
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 823279e9c69549e6bc8c5c18729a8d0c
System UUID: 823279e9-c695-49e6-bc8c-5c18729a8d0c
Boot ID: 0b41b295-6710-4402-83cc-72001fb7e443
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.15
Kubelet Version: v1.30.0-rc.2
Kube-Proxy Version: v1.30.0-rc.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-6d85cfcfd8-cwvgp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29s
default hello-node-connect-57b4589c47-4gbv4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31s
default mysql-64454c8b5c-jfzlw 600m (30%) 700m (35%) 512Mi (13%) 700Mi (18%) 31s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4s
kube-system coredns-7db6d8ff4d-g75wp 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m18s
kube-system coredns-7db6d8ff4d-vzg4j 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m18s
kube-system etcd-functional-366561 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 2m32s
kube-system kube-apiserver-functional-366561 250m (12%) 0 (0%) 0 (0%) 0 (0%) 60s
kube-system kube-controller-manager-functional-366561 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m31s
kube-system kube-proxy-m26jt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m18s
kube-system kube-scheduler-functional-366561 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m32s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m18s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1450m (72%) 700m (35%)
memory 752Mi (19%) 1040Mi (27%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 117s kube-proxy
Normal Starting 2m17s kube-proxy
Normal Starting 2m38s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m38s (x8 over 2m38s) kubelet Node functional-366561 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m38s (x8 over 2m38s) kubelet Node functional-366561 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m38s (x7 over 2m38s) kubelet Node functional-366561 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m38s kubelet Updated Node Allocatable limit across pods
Normal Starting 2m32s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2m31s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 2m31s kubelet Node functional-366561 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m31s kubelet Node functional-366561 status is now: NodeHasSufficientPID
Normal NodeReady 2m31s kubelet Node functional-366561 status is now: NodeReady
Normal NodeHasSufficientMemory 2m31s kubelet Node functional-366561 status is now: NodeHasSufficientMemory
Normal RegisteredNode 2m19s node-controller Node functional-366561 event: Registered Node functional-366561 in Controller
Normal Starting 109s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 108s (x8 over 108s) kubelet Node functional-366561 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 108s (x8 over 108s) kubelet Node functional-366561 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 108s (x7 over 108s) kubelet Node functional-366561 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 108s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 93s node-controller Node functional-366561 event: Registered Node functional-366561 in Controller
Normal Starting 64s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 64s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 63s (x8 over 64s) kubelet Node functional-366561 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 63s (x8 over 64s) kubelet Node functional-366561 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 63s (x7 over 64s) kubelet Node functional-366561 status is now: NodeHasSufficientPID
Normal RegisteredNode 47s node-controller Node functional-366561 event: Registered Node functional-366561 in Controller
==> dmesg <==
[ +0.212221] systemd-fstab-generator[2105]: Ignoring "noauto" option for root device
[ +0.164837] systemd-fstab-generator[2117]: Ignoring "noauto" option for root device
[ +0.374263] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
[ +0.087498] kauditd_printk_skb: 188 callbacks suppressed
[ +1.667032] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
[ +6.015072] kauditd_printk_skb: 40 callbacks suppressed
[Apr17 18:03] kauditd_printk_skb: 21 callbacks suppressed
[ +1.175121] systemd-fstab-generator[3045]: Ignoring "noauto" option for root device
[ +10.994563] kauditd_printk_skb: 23 callbacks suppressed
[ +5.328915] systemd-fstab-generator[3223]: Ignoring "noauto" option for root device
[ +13.663375] systemd-fstab-generator[3526]: Ignoring "noauto" option for root device
[ +0.098141] kauditd_printk_skb: 12 callbacks suppressed
[ +0.073058] systemd-fstab-generator[3538]: Ignoring "noauto" option for root device
[ +0.195498] systemd-fstab-generator[3552]: Ignoring "noauto" option for root device
[ +0.169121] systemd-fstab-generator[3564]: Ignoring "noauto" option for root device
[ +0.357903] systemd-fstab-generator[3593]: Ignoring "noauto" option for root device
[ +1.164096] systemd-fstab-generator[3764]: Ignoring "noauto" option for root device
[ +11.028153] kauditd_printk_skb: 125 callbacks suppressed
[ +1.672117] systemd-fstab-generator[4027]: Ignoring "noauto" option for root device
[Apr17 18:04] kauditd_printk_skb: 35 callbacks suppressed
[ +16.529580] systemd-fstab-generator[4382]: Ignoring "noauto" option for root device
[ +7.172942] kauditd_printk_skb: 19 callbacks suppressed
[ +5.026848] kauditd_printk_skb: 19 callbacks suppressed
[ +10.510919] kauditd_printk_skb: 37 callbacks suppressed
[ +8.661178] kauditd_printk_skb: 16 callbacks suppressed
==> etcd [10806836be08d43ee757223b9498da7730088ad4d0be8de31221496d6b100de1] <==
{"level":"info","ts":"2024-04-17T18:03:05.146035Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-04-17T18:03:05.147164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.13:2379"}
{"level":"info","ts":"2024-04-17T18:04:41.64658Z","caller":"traceutil/trace.go:171","msg":"trace[1211228161] linearizableReadLoop","detail":"{readStateIndex:808; appliedIndex:807; }","duration":"107.339551ms","start":"2024-04-17T18:04:41.539206Z","end":"2024-04-17T18:04:41.646545Z","steps":["trace[1211228161] 'read index received' (duration: 107.221906ms)","trace[1211228161] 'applied index is now lower than readState.Index' (duration: 117.261µs)"],"step_count":2}
{"level":"warn","ts":"2024-04-17T18:04:41.646958Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.611309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/\" range_end:\"/registry/services/specs/default0\" ","response":"range_response_count:4 size:2637"}
{"level":"info","ts":"2024-04-17T18:04:41.647031Z","caller":"traceutil/trace.go:171","msg":"trace[1252540934] range","detail":"{range_begin:/registry/services/specs/default/; range_end:/registry/services/specs/default0; response_count:4; response_revision:743; }","duration":"107.840705ms","start":"2024-04-17T18:04:41.539182Z","end":"2024-04-17T18:04:41.647022Z","steps":["trace[1252540934] 'agreement among raft nodes before linearized reading' (duration: 107.539447ms)"],"step_count":1}
{"level":"info","ts":"2024-04-17T18:04:41.64721Z","caller":"traceutil/trace.go:171","msg":"trace[1644345999] transaction","detail":"{read_only:false; response_revision:743; number_of_response:1; }","duration":"220.146131ms","start":"2024-04-17T18:04:41.427057Z","end":"2024-04-17T18:04:41.647203Z","steps":["trace[1644345999] 'process raft request' (duration: 219.410114ms)"],"step_count":1}
{"level":"info","ts":"2024-04-17T18:04:43.435445Z","caller":"traceutil/trace.go:171","msg":"trace[630185727] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"474.473599ms","start":"2024-04-17T18:04:42.960955Z","end":"2024-04-17T18:04:43.435429Z","steps":["trace[630185727] 'process raft request' (duration: 474.351733ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-17T18:04:43.436778Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:04:42.960937Z","time spent":"474.603673ms","remote":"127.0.0.1:47936","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-bx6bj5iks4bhnv22tlivd722f4\" mod_revision:701 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-bx6bj5iks4bhnv22tlivd722f4\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-bx6bj5iks4bhnv22tlivd722f4\" > >"}
{"level":"info","ts":"2024-04-17T18:04:43.437734Z","caller":"traceutil/trace.go:171","msg":"trace[749517454] linearizableReadLoop","detail":"{readStateIndex:815; appliedIndex:814; }","duration":"144.611352ms","start":"2024-04-17T18:04:43.29311Z","end":"2024-04-17T18:04:43.437721Z","steps":["trace[749517454] 'read index received' (duration: 144.487295ms)","trace[749517454] 'applied index is now lower than readState.Index' (duration: 123.522µs)"],"step_count":2}
{"level":"info","ts":"2024-04-17T18:04:43.439574Z","caller":"traceutil/trace.go:171","msg":"trace[844471637] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"259.865378ms","start":"2024-04-17T18:04:43.179695Z","end":"2024-04-17T18:04:43.439561Z","steps":["trace[844471637] 'process raft request' (duration: 257.963711ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-17T18:04:43.440501Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.376668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11574"}
{"level":"info","ts":"2024-04-17T18:04:43.440582Z","caller":"traceutil/trace.go:171","msg":"trace[944766252] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:750; }","duration":"147.48364ms","start":"2024-04-17T18:04:43.293087Z","end":"2024-04-17T18:04:43.440571Z","steps":["trace[944766252] 'agreement among raft nodes before linearized reading' (duration: 147.30161ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-17T18:04:43.441022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.70795ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-17T18:04:43.441093Z","caller":"traceutil/trace.go:171","msg":"trace[301249660] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:750; }","duration":"139.82738ms","start":"2024-04-17T18:04:43.301255Z","end":"2024-04-17T18:04:43.441083Z","steps":["trace[301249660] 'agreement among raft nodes before linearized reading' (duration: 139.73386ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-17T18:04:43.441457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.121984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.13\" ","response":"range_response_count:1 size:133"}
{"level":"info","ts":"2024-04-17T18:04:43.44153Z","caller":"traceutil/trace.go:171","msg":"trace[404705593] range","detail":"{range_begin:/registry/masterleases/192.168.39.13; range_end:; response_count:1; response_revision:750; }","duration":"102.224844ms","start":"2024-04-17T18:04:43.339294Z","end":"2024-04-17T18:04:43.441519Z","steps":["trace[404705593] 'agreement among raft nodes before linearized reading' (duration: 102.09332ms)"],"step_count":1}
{"level":"info","ts":"2024-04-17T18:04:56.99051Z","caller":"traceutil/trace.go:171","msg":"trace[1739360161] linearizableReadLoop","detail":"{readStateIndex:849; appliedIndex:848; }","duration":"373.760741ms","start":"2024-04-17T18:04:56.616731Z","end":"2024-04-17T18:04:56.990492Z","steps":["trace[1739360161] 'read index received' (duration: 373.545193ms)","trace[1739360161] 'applied index is now lower than readState.Index' (duration: 214.695µs)"],"step_count":2}
{"level":"info","ts":"2024-04-17T18:04:56.990949Z","caller":"traceutil/trace.go:171","msg":"trace[1709261119] transaction","detail":"{read_only:false; response_revision:781; number_of_response:1; }","duration":"455.437141ms","start":"2024-04-17T18:04:56.535393Z","end":"2024-04-17T18:04:56.99083Z","steps":["trace[1709261119] 'process raft request' (duration: 454.937013ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-17T18:04:56.991324Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:04:56.53538Z","time spent":"455.785924ms","remote":"127.0.0.1:47828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:780 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2024-04-17T18:04:56.992388Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.651535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/mysql-64454c8b5c-jfzlw\" ","response":"range_response_count:1 size:3147"}
{"level":"info","ts":"2024-04-17T18:04:56.992775Z","caller":"traceutil/trace.go:171","msg":"trace[640641266] range","detail":"{range_begin:/registry/pods/default/mysql-64454c8b5c-jfzlw; range_end:; response_count:1; response_revision:781; }","duration":"376.049769ms","start":"2024-04-17T18:04:56.616706Z","end":"2024-04-17T18:04:56.992756Z","steps":["trace[640641266] 'agreement among raft nodes before linearized reading' (duration: 375.61026ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-17T18:04:56.993022Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:04:56.616692Z","time spent":"376.313705ms","remote":"127.0.0.1:47848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":3169,"request content":"key:\"/registry/pods/default/mysql-64454c8b5c-jfzlw\" "}
{"level":"warn","ts":"2024-04-17T18:04:56.993029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.4053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11892"}
{"level":"info","ts":"2024-04-17T18:04:56.993371Z","caller":"traceutil/trace.go:171","msg":"trace[1581270615] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:781; }","duration":"338.214611ms","start":"2024-04-17T18:04:56.655145Z","end":"2024-04-17T18:04:56.993359Z","steps":["trace[1581270615] 'agreement among raft nodes before linearized reading' (duration: 335.789156ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-17T18:04:56.994007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:04:56.655132Z","time spent":"338.863067ms","remote":"127.0.0.1:47848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":11914,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
==> etcd [478ddb0e91382700b25ca95dbf3feb9d672ce1347b5c42af01a52430e6a5c6ab] <==
{"level":"info","ts":"2024-04-17T18:02:26.301038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became leader at term 2"}
{"level":"info","ts":"2024-04-17T18:02:26.301045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1d3fba3e6c6ecbcd elected leader 1d3fba3e6c6ecbcd at term 2"}
{"level":"info","ts":"2024-04-17T18:02:26.303917Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1d3fba3e6c6ecbcd","local-member-attributes":"{Name:functional-366561 ClientURLs:[https://192.168.39.13:2379]}","request-path":"/0/members/1d3fba3e6c6ecbcd/attributes","cluster-id":"1e01947a35a5ac2c","publish-timeout":"7s"}
{"level":"info","ts":"2024-04-17T18:02:26.30397Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-04-17T18:02:26.304227Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-04-17T18:02:26.304447Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-04-17T18:02:26.307951Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-04-17T18:02:26.307996Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-04-17T18:02:26.313512Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-04-17T18:02:26.346246Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.13:2379"}
{"level":"info","ts":"2024-04-17T18:02:26.346584Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e01947a35a5ac2c","local-member-id":"1d3fba3e6c6ecbcd","cluster-version":"3.5"}
{"level":"info","ts":"2024-04-17T18:02:26.346707Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-04-17T18:02:26.346757Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-04-17T18:02:39.832929Z","caller":"traceutil/trace.go:171","msg":"trace[749136949] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"396.662962ms","start":"2024-04-17T18:02:39.436252Z","end":"2024-04-17T18:02:39.832915Z","steps":["trace[749136949] 'process raft request' (duration: 396.520236ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-17T18:02:39.833363Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:02:39.436237Z","time spent":"396.76236ms","remote":"127.0.0.1:54928","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4509,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-functional-366561\" mod_revision:304 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-functional-366561\" value_size:4442 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-functional-366561\" > >"}
{"level":"info","ts":"2024-04-17T18:03:02.443477Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-04-17T18:03:02.443571Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-366561","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
{"level":"warn","ts":"2024-04-17T18:03:02.443748Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-04-17T18:03:02.443902Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-04-17T18:03:02.457911Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.13:2379: use of closed network connection"}
{"level":"warn","ts":"2024-04-17T18:03:02.458041Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.13:2379: use of closed network connection"}
{"level":"info","ts":"2024-04-17T18:03:02.458105Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1d3fba3e6c6ecbcd","current-leader-member-id":"1d3fba3e6c6ecbcd"}
{"level":"info","ts":"2024-04-17T18:03:02.46112Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.13:2380"}
{"level":"info","ts":"2024-04-17T18:03:02.461393Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.13:2380"}
{"level":"info","ts":"2024-04-17T18:03:02.461513Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-366561","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
==> kernel <==
18:05:03 up 3 min, 0 users, load average: 1.88, 0.79, 0.30
Linux functional-366561 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [6859c5e8477dab1c555447ed617132d4ba504a478163f5ce75e0491d8a0eeb3f] <==
I0417 18:04:02.020357 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0417 18:04:02.020940 1 shared_informer.go:320] Caches are synced for configmaps
I0417 18:04:02.021926 1 aggregator.go:165] initial CRD sync complete...
I0417 18:04:02.022130 1 autoregister_controller.go:141] Starting autoregister controller
I0417 18:04:02.022240 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0417 18:04:02.022440 1 cache.go:39] Caches are synced for autoregister controller
I0417 18:04:02.084649 1 shared_informer.go:320] Caches are synced for node_authorizer
I0417 18:04:02.096223 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0417 18:04:02.096359 1 policy_source.go:224] refreshing policies
I0417 18:04:02.109995 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0417 18:04:02.929098 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0417 18:04:03.347918 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13]
I0417 18:04:03.349828 1 controller.go:615] quota admission added evaluator for: endpoints
I0417 18:04:03.355734 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0417 18:04:03.514107 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0417 18:04:03.526409 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0417 18:04:03.561079 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0417 18:04:03.592615 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0417 18:04:03.599123 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0417 18:04:26.470107 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.202.175"}
I0417 18:04:31.107558 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0417 18:04:31.222486 1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.235.46"}
I0417 18:04:31.245287 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.164.202"}
I0417 18:04:33.934298 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.147.82"}
E0417 18:04:57.128105 1 upgradeaware.go:427] Error proxying data from client to backend: write tcp 192.168.39.13:33596->192.168.39.13:10250: write: broken pipe
==> kube-controller-manager [1bcd935868e0deede44c4cf609102a4b8e23c92c57550df7affaa8def8dde735] <==
I0417 18:04:15.296045 1 shared_informer.go:320] Caches are synced for expand
I0417 18:04:15.301165 1 shared_informer.go:320] Caches are synced for resource quota
I0417 18:04:15.337154 1 shared_informer.go:320] Caches are synced for ReplicationController
I0417 18:04:15.345465 1 shared_informer.go:320] Caches are synced for resource quota
I0417 18:04:15.726110 1 shared_informer.go:320] Caches are synced for garbage collector
I0417 18:04:15.761758 1 shared_informer.go:320] Caches are synced for garbage collector
I0417 18:04:15.761802 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0417 18:04:31.174519 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="61.150226ms"
I0417 18:04:31.198786 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="24.208412ms"
I0417 18:04:31.200127 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="87.312µs"
I0417 18:04:31.206734 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="26.924µs"
I0417 18:04:31.341093 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="39.762624ms"
I0417 18:04:31.366673 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="25.054838ms"
I0417 18:04:31.368566 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="54.754µs"
I0417 18:04:31.375594 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="41.422µs"
I0417 18:04:33.862951 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="56.884002ms"
I0417 18:04:33.894482 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="31.4716ms"
I0417 18:04:33.959348 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="64.803922ms"
I0417 18:04:33.959463 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="78.319µs"
I0417 18:04:35.020977 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="10.36698ms"
I0417 18:04:35.021553 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="24.054µs"
I0417 18:04:46.090788 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="21.274133ms"
I0417 18:04:46.093400 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="73.858µs"
I0417 18:04:46.129322 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="39.023212ms"
I0417 18:04:46.134011 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="44.64µs"
==> kube-controller-manager [aa47091bafb13d5035bb195f6ee2e5886c67ead7fdf2fc40f9136a8fa983bea6] <==
I0417 18:03:29.519509 1 shared_informer.go:320] Caches are synced for ReplicaSet
I0417 18:03:29.531022 1 shared_informer.go:320] Caches are synced for HPA
I0417 18:03:29.532246 1 shared_informer.go:320] Caches are synced for resource quota
I0417 18:03:29.536005 1 shared_informer.go:320] Caches are synced for disruption
I0417 18:03:29.536920 1 shared_informer.go:320] Caches are synced for stateful set
I0417 18:03:29.542688 1 shared_informer.go:320] Caches are synced for attach detach
I0417 18:03:29.549714 1 shared_informer.go:320] Caches are synced for job
I0417 18:03:29.550625 1 shared_informer.go:320] Caches are synced for GC
I0417 18:03:29.552963 1 shared_informer.go:320] Caches are synced for deployment
I0417 18:03:29.551981 1 shared_informer.go:320] Caches are synced for taint-eviction-controller
I0417 18:03:29.555187 1 shared_informer.go:320] Caches are synced for PV protection
I0417 18:03:29.555235 1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
I0417 18:03:29.557751 1 shared_informer.go:320] Caches are synced for daemon sets
I0417 18:03:29.565161 1 shared_informer.go:320] Caches are synced for taint
I0417 18:03:29.565589 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0417 18:03:29.566266 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-366561"
I0417 18:03:29.566556 1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0417 18:03:29.572011 1 shared_informer.go:320] Caches are synced for persistent volume
I0417 18:03:29.572455 1 shared_informer.go:320] Caches are synced for ephemeral
I0417 18:03:29.574014 1 shared_informer.go:320] Caches are synced for endpoint
I0417 18:03:29.579182 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.479715ms"
I0417 18:03:29.580549 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.579µs"
I0417 18:03:30.021392 1 shared_informer.go:320] Caches are synced for garbage collector
I0417 18:03:30.063236 1 shared_informer.go:320] Caches are synced for garbage collector
I0417 18:03:30.063264 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
==> kube-proxy [52a94f758192796b9d68414d2798e58b00d66ab1635824b8f2e7f35b3296dd52] <==
I0417 18:02:45.817965 1 server_linux.go:69] "Using iptables proxy"
I0417 18:02:45.831296 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.13"]
I0417 18:02:45.917663 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0417 18:02:45.917724 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0417 18:02:45.917740 1 server_linux.go:165] "Using iptables Proxier"
I0417 18:02:45.921246 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0417 18:02:45.921624 1 server.go:872] "Version info" version="v1.30.0-rc.2"
I0417 18:02:45.921662 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0417 18:02:45.922822 1 config.go:192] "Starting service config controller"
I0417 18:02:45.923162 1 shared_informer.go:313] Waiting for caches to sync for service config
I0417 18:02:45.923560 1 config.go:101] "Starting endpoint slice config controller"
I0417 18:02:45.923704 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0417 18:02:45.924566 1 config.go:319] "Starting node config controller"
I0417 18:02:45.924602 1 shared_informer.go:313] Waiting for caches to sync for node config
I0417 18:02:46.024465 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0417 18:02:46.024544 1 shared_informer.go:320] Caches are synced for service config
I0417 18:02:46.024811 1 shared_informer.go:320] Caches are synced for node config
==> kube-proxy [d237d1a584d9cad2024a5c96eb3893a636786d7554ded6fe60cf4936dc76bd0e] <==
W0417 18:03:05.596637 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:05.596666 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:05.596710 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:05.596765 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:06.936124 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:06.936173 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:07.007206 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:07.007553 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:07.019700 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:07.019765 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:09.553227 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:09.553363 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:09.625657 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:09.626178 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:10.184540 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:10.184715 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:13.077401 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:13.077537 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:14.699774 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:14.699906 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
W0417 18:03:14.853353 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
E0417 18:03:14.853468 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
I0417 18:03:22.696327 1 shared_informer.go:320] Caches are synced for node config
I0417 18:03:23.095363 1 shared_informer.go:320] Caches are synced for service config
I0417 18:03:24.696059 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [0e641817cefff59052b2dba024282cec36f455706fae6bec5b64e25d12999be1] <==
I0417 18:03:05.456175 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0417 18:03:05.461797 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0417 18:03:05.462830 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
E0417 18:03:16.928638 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
E0417 18:03:16.929123 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
E0417 18:03:16.929423 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
E0417 18:03:16.989787 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
E0417 18:03:16.994820 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
E0417 18:03:16.996498 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)
E0417 18:03:16.996973 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
E0417 18:03:16.998247 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)
E0417 18:03:16.998555 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
E0417 18:03:16.998703 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
E0417 18:03:17.000966 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
E0417 18:03:17.001248 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0417 18:03:17.001493 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)
E0417 18:03:17.001708 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
E0417 18:03:17.001823 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
E0417 18:03:17.001987 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)
I0417 18:03:56.872218 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0417 18:03:56.872393 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0417 18:03:56.872427 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0417 18:03:56.872517 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I0417 18:03:56.872588 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
E0417 18:03:56.872199 1 run.go:74] "command failed" err="finished without leader elect"
==> kube-scheduler [256991d3743fae01871293d72fd9dc862d28ab3964f577ba5df2a4afe4e0dd43] <==
I0417 18:04:00.964297 1 serving.go:380] Generated self-signed cert in-memory
W0417 18:04:01.993754 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0417 18:04:01.993942 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0417 18:04:01.994403 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0417 18:04:01.994590 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0417 18:04:02.032352 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.2"
I0417 18:04:02.032400 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0417 18:04:02.036066 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0417 18:04:02.036353 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0417 18:04:02.036528 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0417 18:04:02.036715 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0417 18:04:02.137602 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 17 18:04:57 functional-366561 kubelet[4034]: I0417 18:04:57.614454 4034 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cxs66\" (UniqueName: \"kubernetes.io/projected/af3e3341-24a0-4157-bc43-d12c80894555-kube-api-access-cxs66\") on node \"functional-366561\" DevicePath \"\""
Apr 17 18:04:57 functional-366561 kubelet[4034]: I0417 18:04:57.614488 4034 reconciler_common.go:289] "Volume detached for volume \"pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d\" (UniqueName: \"kubernetes.io/host-path/af3e3341-24a0-4157-bc43-d12c80894555-pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d\") on node \"functional-366561\" DevicePath \"\""
Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.089318 4034 scope.go:117] "RemoveContainer" containerID="34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587"
Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.106910 4034 scope.go:117] "RemoveContainer" containerID="34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587"
Apr 17 18:04:58 functional-366561 kubelet[4034]: E0417 18:04:58.109134 4034 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587\": not found" containerID="34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587"
Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.109224 4034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587"} err="failed to get container status \"34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587\": rpc error: code = NotFound desc = an error occurred when try to find container \"34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587\": not found"
Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.239493 4034 topology_manager.go:215] "Topology Admit Handler" podUID="cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c" podNamespace="default" podName="sp-pod"
Apr 17 18:04:58 functional-366561 kubelet[4034]: E0417 18:04:58.239636 4034 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af3e3341-24a0-4157-bc43-d12c80894555" containerName="myfrontend"
Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.239667 4034 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3e3341-24a0-4157-bc43-d12c80894555" containerName="myfrontend"
Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.321404 4034 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d\" (UniqueName: \"kubernetes.io/host-path/cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c-pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d\") pod \"sp-pod\" (UID: \"cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c\") " pod="default/sp-pod"
Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.321446 4034 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cwsz\" (UniqueName: \"kubernetes.io/projected/cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c-kube-api-access-5cwsz\") pod \"sp-pod\" (UID: \"cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c\") " pod="default/sp-pod"
Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.771604 4034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af3e3341-24a0-4157-bc43-d12c80894555" path="/var/lib/kubelet/pods/af3e3341-24a0-4157-bc43-d12c80894555/volumes"
Apr 17 18:04:58 functional-366561 kubelet[4034]: E0417 18:04:58.785018 4034 iptables.go:577] "Could not set up iptables canary" err=<
Apr 17 18:04:58 functional-366561 kubelet[4034]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Apr 17 18:04:58 functional-366561 kubelet[4034]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Apr 17 18:04:58 functional-366561 kubelet[4034]: Perhaps ip6tables or your kernel needs to be upgraded.
Apr 17 18:04:58 functional-366561 kubelet[4034]: > table="nat" chain="KUBE-KUBELET-CANARY"
Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.300530 4034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.76901149 podStartE2EDuration="3.300511764s" podCreationTimestamp="2024-04-17 18:04:58 +0000 UTC" firstStartedPulling="2024-04-17 18:04:58.789693572 +0000 UTC m=+60.177560344" lastFinishedPulling="2024-04-17 18:05:00.321193843 +0000 UTC m=+61.709060618" observedRunningTime="2024-04-17 18:05:01.142611071 +0000 UTC m=+62.530477861" watchObservedRunningTime="2024-04-17 18:05:01.300511764 +0000 UTC m=+62.688378555"
Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.344652 4034 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ba240c32-9c8b-4857-b346-62c7edb3d934-test-volume\") pod \"ba240c32-9c8b-4857-b346-62c7edb3d934\" (UID: \"ba240c32-9c8b-4857-b346-62c7edb3d934\") "
Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.345101 4034 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmm8b\" (UniqueName: \"kubernetes.io/projected/ba240c32-9c8b-4857-b346-62c7edb3d934-kube-api-access-zmm8b\") pod \"ba240c32-9c8b-4857-b346-62c7edb3d934\" (UID: \"ba240c32-9c8b-4857-b346-62c7edb3d934\") "
Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.345040 4034 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba240c32-9c8b-4857-b346-62c7edb3d934-test-volume" (OuterVolumeSpecName: "test-volume") pod "ba240c32-9c8b-4857-b346-62c7edb3d934" (UID: "ba240c32-9c8b-4857-b346-62c7edb3d934"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.350488 4034 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba240c32-9c8b-4857-b346-62c7edb3d934-kube-api-access-zmm8b" (OuterVolumeSpecName: "kube-api-access-zmm8b") pod "ba240c32-9c8b-4857-b346-62c7edb3d934" (UID: "ba240c32-9c8b-4857-b346-62c7edb3d934"). InnerVolumeSpecName "kube-api-access-zmm8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.446514 4034 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zmm8b\" (UniqueName: \"kubernetes.io/projected/ba240c32-9c8b-4857-b346-62c7edb3d934-kube-api-access-zmm8b\") on node \"functional-366561\" DevicePath \"\""
Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.446591 4034 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ba240c32-9c8b-4857-b346-62c7edb3d934-test-volume\") on node \"functional-366561\" DevicePath \"\""
Apr 17 18:05:02 functional-366561 kubelet[4034]: I0417 18:05:02.112069 4034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27"
==> storage-provisioner [27bc1ca9864b2ef502be0415eb0d969895ef1350c9eb88e73527cdb29a9f1c8b] <==
I0417 18:04:18.952653 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0417 18:04:18.961274 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0417 18:04:18.961497 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0417 18:04:36.374388 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0417 18:04:36.374757 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-366561_9cec50bb-e786-46d5-b791-866f00d0ed89!
I0417 18:04:36.376119 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4a7f19e-ae80-4dc9-aee1-c37564a8ef17", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-366561_9cec50bb-e786-46d5-b791-866f00d0ed89 became leader
I0417 18:04:36.475218 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-366561_9cec50bb-e786-46d5-b791-866f00d0ed89!
I0417 18:04:38.240070 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0417 18:04:38.242788 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fd397dbb-c491-42d5-a6ab-c49321dbb37d", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0417 18:04:38.240700 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 19763129-b30d-4c94-821b-abbe36aa5948 382 0 2024-04-17 18:02:44 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-04-17 18:02:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d &PersistentVolumeClaim{ObjectMeta:{myclaim default fd397dbb-c491-42d5-a6ab-c49321dbb37d 733 0 2024-04-17 18:04:38 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2024-04-17 18:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-04-17 18:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0417 18:04:38.247761 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d" provisioned
I0417 18:04:38.248087 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0417 18:04:38.248365 1 volume_store.go:212] Trying to save persistentvolume "pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d"
I0417 18:04:38.281792 1 volume_store.go:219] persistentvolume "pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d" saved
I0417 18:04:38.339435 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fd397dbb-c491-42d5-a6ab-c49321dbb37d", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d
==> storage-provisioner [ea3a7a71a7dc10c70e0062c24936d2b324cf229928f68dded4693a554ad91238] <==
I0417 18:04:03.178798 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0417 18:04:03.181648 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-366561 -n functional-366561
helpers_test.go:261: (dbg) Run: kubectl --context functional-366561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-366561 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-366561 describe pod busybox-mount:
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-366561/192.168.39.13
Start Time: Wed, 17 Apr 2024 18:04:57 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.9
IPs:
IP: 10.244.0.9
Containers:
mount-munger:
Container ID: containerd://f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 17 Apr 2024 18:04:59 +0000
Finished: Wed, 17 Apr 2024 18:04:59 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zmm8b (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-zmm8b:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6s default-scheduler Successfully assigned default/busybox-mount to functional-366561
Normal Pulling 7s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.477s (1.478s including waiting). Image size: 2395207 bytes.
Normal Created 5s kubelet Created container mount-munger
Normal Started 5s kubelet Started container mount-munger
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (33.14s)