=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run: kubectl --context functional-170143 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run: kubectl --context functional-170143 expose deployment hello-node --type=NodePort --port=8080
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-9t5g8" [d0fb1920-69a5-45d8-b407-9bcb5b0a566c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-9t5g8" [d0fb1920-69a5-45d8-b407-9bcb5b0a566c] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 36.1134299s
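[editor's note] For context on the wait above: the helper in helpers_test.go polls pods matching a label selector until they all report Ready. A minimal client-go sketch of the same idea — not the actual helper; the function names and the 2s poll interval here are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReady polls pods matching selector until at least one exists and
// all of them report the PodReady condition, or the context expires.
func waitForReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	tick := time.NewTicker(2 * time.Second) // poll interval is an assumption
	defer tick.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for i := range pods.Items {
			if !isReady(&pods.Items[i]) {
				ready = false
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirrors the 10m0s budget the test uses for "app=hello-node".
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	fmt.Println(waitForReady(ctx, cs, "default", "app=hello-node"))
}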
functional_test.go:1449: (dbg) Run: out/minikube-windows-amd64.exe -p functional-170143 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 service list: (1.8753125s)
functional_test.go:1463: (dbg) Run: out/minikube-windows-amd64.exe -p functional-170143 service --namespace=default --https --url hello-node
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1392: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-170143 service --namespace=default --https --url hello-node: exit status 1 (35m25.3913569s)
-- stdout --
https://127.0.0.1:57848
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1465: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-170143 service --namespace=default --https --url hello-node" : exit status 1
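[editor's note] The 35m25s runtime follows from the two messages above: with the Docker driver on Windows, `minikube service --url` keeps a tunnel process alive for as long as the URL should stay reachable (hence "the terminal needs to be open"), and the harness cannot deliver an interrupt to that process on Windows (functional_test.go:1392), so the command only ends when the whole test run is torn down. A hedged sketch of how a caller can bound such a command with a hard deadline instead of an interrupt — exec.CommandContext kills the child outright when the context expires, which does work on Windows; the 30s timeout is an arbitrary choice for illustration:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hard deadline instead of os.Interrupt: on expiry the child is killed.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Hypothetical invocation mirroring the failing test command.
	cmd := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
		"-p", "functional-170143", "service",
		"--namespace=default", "--https", "--url", "hello-node")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s\nerr: %v\n", out, err)
}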
functional_test.go:1402: service test failed - dumping debug information
functional_test.go:1403: -----------------------service failure post-mortem--------------------------------
functional_test.go:1406: (dbg) Run: kubectl --context functional-170143 describe po hello-node
functional_test.go:1410: hello-node pod describe:
Name: hello-node-5fcdfb5cc4-9t5g8
Namespace: default
Priority: 0
Node: functional-170143/192.168.49.2
Start Time: Mon, 07 Nov 2022 17:05:39 +0000
Labels: app=hello-node
pod-template-hash=5fcdfb5cc4
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: ReplicaSet/hello-node-5fcdfb5cc4
Containers:
echoserver:
Container ID: docker://785b4ed736b85a2190c29751ef74c4f8cc52ffa6072e8349168f3b3be175a1ec
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 07 Nov 2022 17:06:10 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8l6tm (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-8l6tm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-5fcdfb5cc4-9t5g8 to functional-170143
Normal Pulling 36m kubelet, functional-170143 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 35m kubelet, functional-170143 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 28.4153628s
Normal Created 35m kubelet, functional-170143 Created container echoserver
Normal Started 35m kubelet, functional-170143 Started container echoserver
Name: hello-node-connect-6458c8fb6f-pstw6
Namespace: default
Priority: 0
Node: functional-170143/192.168.49.2
Start Time: Mon, 07 Nov 2022 17:08:20 +0000
Labels: app=hello-node-connect
pod-template-hash=6458c8fb6f
Annotations: <none>
Status: Running
IP: 172.17.0.7
IPs:
IP: 172.17.0.7
Controlled By: ReplicaSet/hello-node-connect-6458c8fb6f
Containers:
echoserver:
Container ID: docker://3cd6c2db5f19184868bd08576732e3f52822925f39ad71b5993a9fa06a22dd77
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 07 Nov 2022 17:08:24 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jnzwc (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-jnzwc:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-6458c8fb6f-pstw6 to functional-170143
Normal Pulled 33m kubelet, functional-170143 Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal Created 33m kubelet, functional-170143 Created container echoserver
Normal Started 33m kubelet, functional-170143 Started container echoserver
functional_test.go:1412: (dbg) Run: kubectl --context functional-170143 logs -l app=hello-node
functional_test.go:1416: hello-node logs:
functional_test.go:1418: (dbg) Run: kubectl --context functional-170143 describe svc hello-node
functional_test.go:1422: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.100.207.192
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31883/TCP
Endpoints: 172.17.0.3:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
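[editor's note] The describe output shows the backend itself was healthy: a NodePort service mapping port 8080 to NodePort 31883 with a live endpoint at 172.17.0.3:8080. Under the Docker driver on Windows the node IP (192.168.49.2) is not reachable from the host, which is why the service command printed a 127.0.0.1 tunnel URL (https://127.0.0.1:57848) rather than a node URL. A diagnostic sketch for probing that printed URL while the tunnel process is still running; the port number is per-run, and skipping certificate verification is an assumption of this one-off probe:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// No real certificate is assumed behind the tunnel; skip
		// verification for this diagnostic probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://127.0.0.1:57848") // port is per-run
	if err != nil {
		fmt.Println("probe failed (is the tunnel still running?):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}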
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-170143
helpers_test.go:235: (dbg) docker inspect functional-170143:
-- stdout --
[
{
"Id": "286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515",
"Created": "2022-11-07T17:02:22.3086205Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 27330,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-11-07T17:02:23.3498703Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
"ResolvConfPath": "/var/lib/docker/containers/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515/hostname",
"HostsPath": "/var/lib/docker/containers/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515/hosts",
"LogPath": "/var/lib/docker/containers/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515-json.log",
"Name": "/functional-170143",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-170143:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-170143",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/037ff98fecd1928a702b7ead72b0f52271bee45c53e0caef80129b972bd0c537-init/diff:/var/lib/docker/overlay2/5ba40928978efc1ee3b35421e2a49e4e2a7d59d61b07bb8e461b5416c8a7cee7/diff:/var/lib/docker/overlay2/67e02326f2fb9638b3c744df240d022783ccecb7d0e13e0d4028b0f8bf17e69d/diff:/var/lib/docker/overlay2/2df41d3bee4190176a765702135566ea66b1390e8b91dfa86b8de2bce135a93a/diff:/var/lib/docker/overlay2/3ec94dbaa89905250e2398ca72e3bb9ff5dccddd8b415085183015f908fee35f/diff:/var/lib/docker/overlay2/3ff2e3a3d014a61bdc0a08d62538ff8c84667c0284decf8ecda52f68283ff0fb/diff:/var/lib/docker/overlay2/bec12fe29cd5fb8e9a7e5bb928cb25b20213dd7883f37ea7dd0a8e3bc0351052/diff:/var/lib/docker/overlay2/21c29267c8a16c82c45149aee257177584b1ce7c75fa787decd6c03a640936f7/diff:/var/lib/docker/overlay2/5552452888ed9ac6a45e539159cccc1e649ef7ad0bc04a4418eebab44d92e666/diff:/var/lib/docker/overlay2/3f5659bfc1d27650ea46807074a281c02900176a5f42ac3ce1101e612aea49a4/diff:/var/lib/docker/overlay2/95ed14
d67ee43712c9773f372551bf224bbcbf05234904cb75bfe650e5a9b431/diff:/var/lib/docker/overlay2/c61bea1335a18e64dabe990546948a49a1e791d643b48037370421d0751659c3/diff:/var/lib/docker/overlay2/4bceff48ae8e97fbcd073948091f9c7dbeadc230b98de67471c5522b9c386672/diff:/var/lib/docker/overlay2/23bacba3c342644af413c4af4dd2d246c778f3794857f6249648a877a053a59c/diff:/var/lib/docker/overlay2/b52423693db548690f91d1cd1a682e7dcffed995839ad13f0c371c2d681d58ae/diff:/var/lib/docker/overlay2/78ed02992e8d5b101283c1328bd5aaa12d7e0ca041f267cc87df49ef21d9bb03/diff:/var/lib/docker/overlay2/46157251f5db6a6570ed965e54b6f9c571885b984df59133027ccf004684e35b/diff:/var/lib/docker/overlay2/a7138fb69aba5dad874e92c39963591ac31b8c00283be1cef1f97bb03e29e95b/diff:/var/lib/docker/overlay2/c758e4b48f926dc6128c8daee3fc24a31cf68d0c856315d42cd496a0dbdd8539/diff:/var/lib/docker/overlay2/177fe0e8ee94dbc81e32cb39d5d299febe5bdcc240161d4b1835668fe03b5209/diff:/var/lib/docker/overlay2/f079d80f0588e1138baa92eb5c6d7f1bd3b748adbba870d85b973e09f3ebf494/diff:/var/lib/d
ocker/overlay2/c3813cada301ad2ba06f263b5ccf3e0b01ae80626c1d9caa7145c8b44f41463e/diff:/var/lib/docker/overlay2/72b362c3acbe525943f481d496d0727bf0f806a59448acd97435a15c292fef7e/diff:/var/lib/docker/overlay2/f3dae2918bbd88ecf6fa92ce58b695b5b7c2da5701725c4de1346a5152bfb602/diff:/var/lib/docker/overlay2/a9aa7189cf37379174133f86b5cd20db821dffd303a69bb90d8b837ef9314cae/diff:/var/lib/docker/overlay2/f2580cf4053e61b8bea5cd979c14376e4cb354a10cabb06928d54c1685d717ad/diff:/var/lib/docker/overlay2/935a0de03d362bfbb94f9caed18a864b47c082fd03de4bfa5ea3296602ab831a/diff:/var/lib/docker/overlay2/3cff685fb531dd4d8712d453d4acd726381268d9ddcd0c57a932182872cbf384/diff:/var/lib/docker/overlay2/112b35fd6eb67f7dfac734ed32e36fb98e01f15bd9c239c2f80d0bf851060ea4/diff:/var/lib/docker/overlay2/01282a02b23965342a99a1d1cc886e98e3cdc825c6ca80b04373c4406c9aa4f3/diff:/var/lib/docker/overlay2/bd54f122cc195ba2f796884b001defe75facaad0c89ccc34a6f6465aaa917fe9/diff:/var/lib/docker/overlay2/20dfd6c01cb2b243e552c3e422dd7b551e0db65fb0c630c438801d475ad
f77a1/diff:/var/lib/docker/overlay2/411ec7d4646f3c8ed6c04c781054e871311645faa7de90212e5c5454192092fd/diff:/var/lib/docker/overlay2/bb233cf9945b014c96c4bcbef2e9ef2f1e040f65910db652eb424af82e93768d/diff:/var/lib/docker/overlay2/a6de3a7d987b965f42f8379040ffd401aad9d38f67ac126754e8d62b555407aa/diff:/var/lib/docker/overlay2/b2ce15147e01c2b1eff488a0aec2cdcf950484589bf948d4b1f3a8a876232d09/diff:/var/lib/docker/overlay2/8a119f66dd46b7cc5f5ba77598b3979bf10ddf84081ea4872ec2ce3375d41684/diff:/var/lib/docker/overlay2/b3c7202a41b63567d929a27b911caefdba403bae7ea5f11b89f717ecb1013955/diff:/var/lib/docker/overlay2/d87eb4edb251e5b57913be1bf6653b8ad0988f5aefaf73d12984c2b91801af17/diff:/var/lib/docker/overlay2/df756f877bb755e1124e9ccaa62bd29d76f04822f12787db45118fcba1de223d/diff:/var/lib/docker/overlay2/ba2334ebb657af4b27997ce445bfc2ce0f740fb6fe3edba5a315042fd325a7d3/diff:/var/lib/docker/overlay2/ba4ef7e8994716049d65e5b49db39352db8c77cd45684b9516c827f4114572cb/diff:/var/lib/docker/overlay2/3df6d706ee5529d758e5ed38fd5b49f5733ae7
45d03cb146ad24eb8be305a2a3/diff",
"MergedDir": "/var/lib/docker/overlay2/037ff98fecd1928a702b7ead72b0f52271bee45c53e0caef80129b972bd0c537/merged",
"UpperDir": "/var/lib/docker/overlay2/037ff98fecd1928a702b7ead72b0f52271bee45c53e0caef80129b972bd0c537/diff",
"WorkDir": "/var/lib/docker/overlay2/037ff98fecd1928a702b7ead72b0f52271bee45c53e0caef80129b972bd0c537/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-170143",
"Source": "/var/lib/docker/volumes/functional-170143/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-170143",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-170143",
"name.minikube.sigs.k8s.io": "functional-170143",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2f528b4b5171d81aba0a127b13b266c0f0c768f036ee89240e540e938981ca50",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "57560"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "57561"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "57562"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "57563"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "57559"
}
]
},
"SandboxKey": "/var/run/docker/netns/2f528b4b5171",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-170143": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"286f6c96dd56",
"functional-170143"
],
"NetworkID": "416315494de4a4776bd847db2873960fee12378f7680524d5296a2ef6fd9edc7",
"EndpointID": "05fe04bf5c6ad6bd8b44d7394ed36d9d9ee62dd9a3720799f879052b396bb5a5",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
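[editor's note] One detail worth pulling out of the inspect dump above: NetworkSettings.Ports shows every published container port bound to a dynamically allocated 127.0.0.1 host port (for example the apiserver's 8441/tcp on 127.0.0.1:57559), which is how the host reaches the cluster under the Docker driver. A small sketch that decodes just that mapping from `docker inspect` output with the standard library; the struct mirrors only the fields shown above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Just enough of the inspect document to reach NetworkSettings.Ports.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-170143").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect // docker inspect emits a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 {
		return
	}
	for port, bindings := range containers[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}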
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-170143 -n functional-170143
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-170143 -n functional-170143: (1.9156279s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-170143 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 logs -n 25: (3.3600363s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| service | functional-170143 service | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | |
| | --namespace=default --https | | | | | |
| | --url hello-node | | | | | |
| image | functional-170143 image load --daemon | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
| | gcr.io/google-containers/addon-resizer:functional-170143 | | | | | |
| image | functional-170143 image ls | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
| image | functional-170143 image save | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
| | gcr.io/google-containers/addon-resizer:functional-170143 | | | | | |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| image | functional-170143 image rm | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
| | gcr.io/google-containers/addon-resizer:functional-170143 | | | | | |
| image | functional-170143 image ls | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
| image | functional-170143 image load | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| image | functional-170143 image ls | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:07 GMT |
| image | functional-170143 image save --daemon | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:07 GMT | 07 Nov 22 17:07 GMT |
| | gcr.io/google-containers/addon-resizer:functional-170143 | | | | | |
| ssh | functional-170143 ssh echo | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:07 GMT | 07 Nov 22 17:07 GMT |
| | hello | | | | | |
| ssh | functional-170143 ssh cat | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:07 GMT | 07 Nov 22 17:07 GMT |
| | /etc/hostname | | | | | |
| dashboard | --url --port 36195 | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:07 GMT | |
| | -p functional-170143 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| tunnel | functional-170143 tunnel | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | |
| | --alsologtostderr | | | | | |
| addons | functional-170143 addons list | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| addons | functional-170143 addons list | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| | -o json | | | | | |
| update-context | functional-170143 | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-170143 | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-170143 | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-170143 image ls | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| | --format short | | | | | |
| image | functional-170143 image ls | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| | --format yaml | | | | | |
| ssh | functional-170143 ssh pgrep | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | |
| | buildkitd | | | | | |
| image | functional-170143 image ls | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| | --format json | | | | | |
| image | functional-170143 image build -t | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| | localhost/my-image:functional-170143 | | | | | |
| | testdata\build | | | | | |
| image | functional-170143 image ls | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
| | --format table | | | | | |
| image | functional-170143 image ls | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/11/07 17:06:12
Running on machine: minikube2
Binary: Built with gc go1.19.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1107 17:06:12.828045 7932 out.go:296] Setting OutFile to fd 964 ...
I1107 17:06:12.932233 7932 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:06:12.932233 7932 out.go:309] Setting ErrFile to fd 968...
I1107 17:06:12.932233 7932 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:06:12.955241 7932 out.go:303] Setting JSON to false
I1107 17:06:12.958239 7932 start.go:116] hostinfo: {"hostname":"minikube2","uptime":5410,"bootTime":1667835362,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W1107 17:06:12.958239 7932 start.go:124] gopshost.Virtualization returned error: not implemented yet
I1107 17:06:12.962253 7932 out.go:177] * [functional-170143] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
I1107 17:06:12.966244 7932 notify.go:220] Checking for updates...
I1107 17:06:12.968243 7932 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I1107 17:06:12.970258 7932 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I1107 17:06:12.973275 7932 out.go:177] - MINIKUBE_LOCATION=15310
I1107 17:06:12.976233 7932 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1107 17:06:12.979239 7932 config.go:180] Loaded profile config "functional-170143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:06:12.980244 7932 driver.go:365] Setting default libvirt URI to qemu:///system
I1107 17:06:13.331754 7932 docker.go:137] docker version: linux-20.10.20
I1107 17:06:13.345743 7932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:06:14.069739 7932 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-11-07 17:06:13.5155383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:06:14.075741 7932 out.go:177] * Using the docker driver based on the existing profile
I1107 17:06:14.077752 7932 start.go:282] selected driver: docker
I1107 17:06:14.077752 7932 start.go:808] validating driver "docker" against &{Name:functional-170143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-170143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:06:14.077752 7932 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1107 17:06:14.157382 7932 out.go:177]
W1107 17:06:14.160415 7932 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
I1107 17:06:14.165378 7932 out.go:177]
*
* ==> Docker <==
* -- Logs begin at Mon 2022-11-07 17:02:24 UTC, end at Mon 2022-11-07 17:41:48 UTC. --
Nov 07 17:04:53 functional-170143 dockerd[8121]: time="2022-11-07T17:04:53.547706300Z" level=info msg="Loading containers: start."
Nov 07 17:04:53 functional-170143 dockerd[8121]: time="2022-11-07T17:04:53.995554500Z" level=info msg="ignoring event" container=89de05a3b74d38a2ff938c03814c6fdf6722cd6fa17a02770be1e5e2a2611b3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.355352000Z" level=info msg="Removing stale sandbox 22a3d72c91eaf04eecd915d47dd0ae37c0cc184fc27957d1c4809114b87269e8 (89de05a3b74d38a2ff938c03814c6fdf6722cd6fa17a02770be1e5e2a2611b3b)"
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.366202700Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8ac4136d2bc36547ef6890b6aae764b53ef77d70fb9f04e8d7a141ba8e9457bf 9db520486c82bb5beecd69889569e2b05cc4520dc7901e467e603bcfbd694ecc], retrying...."
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.545914900Z" level=info msg="Removing stale sandbox 57d62f70e24e22cc4fb0892e05f2f14ac6dd2dd2dbc2b07db5295a4e970ae3fe (9afebb5975cea7c01d374265f9a95b92e4dea3431eef9819afcf62fd151aa235)"
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.556669200Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8ac4136d2bc36547ef6890b6aae764b53ef77d70fb9f04e8d7a141ba8e9457bf 232fe3681e82bf3610e01d9cd14b90507af0012ff7855fd57e53e524faeffb5f], retrying...."
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.648502800Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.780079200Z" level=info msg="Loading containers: done."
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.845578500Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.845746600Z" level=info msg="Daemon has completed initialization"
Nov 07 17:04:54 functional-170143 systemd[1]: Started Docker Application Container Engine.
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.940435600Z" level=info msg="API listen on [::]:2376"
Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.950115400Z" level=info msg="API listen on /var/run/docker.sock"
Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.786467000Z" level=info msg="ignoring event" container=6e761f8447a3bcc89ceb2dd9090c9c4e42d57accb71c0f3c50067986670de7c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.975875500Z" level=info msg="ignoring event" container=9259c007b570a8dd11c61a4c99b15df8b1cd1c836624c55de5fefdb65f57754d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.976240300Z" level=info msg="ignoring event" container=b990ee9fa61f66ef72e67668a397591995ff41184f0e78a87615265026764204 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.976449900Z" level=info msg="ignoring event" container=1aa7c0b88e6c8bb06efc89ffa49afa51d9ba4de48d74b0ddecc22f8d4ceb7288 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.978909800Z" level=info msg="ignoring event" container=c11858a245ccea0dd37dddb3f929b1cd3d74c01bad805a2bedaf17cd13a89e2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:05:01 functional-170143 dockerd[8121]: time="2022-11-07T17:05:01.074881600Z" level=info msg="ignoring event" container=5091149bb59264b580b37f0b0a4f5ad0f4d3ad9add76e8b5df4ea577a1ecb689 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:05:02 functional-170143 dockerd[8121]: time="2022-11-07T17:05:02.728996100Z" level=info msg="ignoring event" container=070256b26e4ea3128744a749d4e545c0a30c7eb6ae633b5d5c3897a1815c84e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:05:17 functional-170143 dockerd[8121]: time="2022-11-07T17:05:17.797287900Z" level=info msg="ignoring event" container=602e34b94dd17e190e2774ee0caa46fae8fccf76152c627d8bd7b35c3dbddf36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:08:05 functional-170143 dockerd[8121]: time="2022-11-07T17:08:05.055533700Z" level=info msg="ignoring event" container=1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:08:05 functional-170143 dockerd[8121]: time="2022-11-07T17:08:05.211499000Z" level=info msg="ignoring event" container=61c0075177518d812865641a29d03b4ca5b0d19409b37586778cf4c3b867c828 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:08:42 functional-170143 dockerd[8121]: time="2022-11-07T17:08:42.438894600Z" level=info msg="ignoring event" container=b48f34b1c799d586061268710b4eff4e674a8de872823a3302d9e06f33097a3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:08:43 functional-170143 dockerd[8121]: time="2022-11-07T17:08:43.083755500Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
2fcdc268beef1 nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3 33 minutes ago Running nginx 0 836e27720f791
3cd6c2db5f191 82e4c8a736a4f 33 minutes ago Running echoserver 0 af36bfdb4c07a
765ca37369cd7 nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f 33 minutes ago Running myfrontend 0 82afd834e711f
e5af10ec33df3 mysql@sha256:0e3435e72c493aec752d8274379b1eac4d634f47a7781a7a92b8636fa1dc94c1 34 minutes ago Running mysql 0 5a8450d94fd7e
785b4ed736b85 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 35 minutes ago Running echoserver 0 9f817759ea085
ed1bd51d97794 5185b96f0becf 36 minutes ago Running coredns 3 f1c4fed1ed70d
0300249d11be5 6e38f40d628db 36 minutes ago Running storage-provisioner 3 d277e588fde7f
d333e1a26cba3 beaaf00edd38a 36 minutes ago Running kube-proxy 3 81603c538e45d
afb78fb3244ca 0346dbd74bcb9 36 minutes ago Running kube-apiserver 0 7e2423dff2eff
c93c8ccb82ea9 6d23ec0e8b87e 36 minutes ago Running kube-scheduler 3 72b6a9854c8af
28821c1a02306 a8a176a5d5d69 36 minutes ago Running etcd 3 7d06efd8f3096
b7062e63f12a6 6039992312758 36 minutes ago Running kube-controller-manager 3 c0ddd34f7d6d8
2ac37c176ed7f 6e38f40d628db 37 minutes ago Exited storage-provisioner 2 c3924db868421
01399ac93dbc8 6039992312758 37 minutes ago Exited kube-controller-manager 2 07e597b2fc291
16ede80c27553 5185b96f0becf 37 minutes ago Exited coredns 2 e160201bab365
ea4cf65607784 6d23ec0e8b87e 37 minutes ago Exited kube-scheduler 2 85b79210c2550
b3b6091afc11b beaaf00edd38a 37 minutes ago Exited kube-proxy 2 259799cb2778d
ae63504cf46ee a8a176a5d5d69 37 minutes ago Exited etcd 2 9a58d0f050c52
*
* ==> coredns [16ede80c2755] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [ed1bd51d9779] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> describe nodes <==
* Name: functional-170143
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-170143
kubernetes.io/os=linux
minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262
minikube.k8s.io/name=functional-170143
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_11_07T17_03_00_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 07 Nov 2022 17:02:55 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-170143
AcquireTime: <unset>
RenewTime: Mon, 07 Nov 2022 17:41:39 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 07 Nov 2022 17:39:53 +0000 Mon, 07 Nov 2022 17:02:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 07 Nov 2022 17:39:53 +0000 Mon, 07 Nov 2022 17:02:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 07 Nov 2022 17:39:53 +0000 Mon, 07 Nov 2022 17:02:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Nov 2022 17:39:53 +0000 Mon, 07 Nov 2022 17:03:11 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-170143
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: 996614ec4c814b87b7ec8ebee3d0e8c9
System UUID: 996614ec4c814b87b7ec8ebee3d0e8c9
Boot ID: 5d9b34fc-681b-4fde-9fda-bd2b0089dce3
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-5fcdfb5cc4-9t5g8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36m
default hello-node-connect-6458c8fb6f-pstw6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
default mysql-596b7fcdbf-f99rm 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 35m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
kube-system coredns-565d847f94-gd62f 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-170143 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-170143 250m (1%) 0 (0%) 0 (0%) 0 (0%) 36m
kube-system kube-controller-manager-functional-170143 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-phtqg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-170143 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 38m kube-proxy
Normal Starting 36m kube-proxy
Normal Starting 37m kube-proxy
Normal NodeHasSufficientMemory 39m (x7 over 39m) kubelet Node functional-170143 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x7 over 39m) kubelet Node functional-170143 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 39m (x6 over 39m) kubelet Node functional-170143 status is now: NodeHasSufficientPID
Normal Starting 38m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 38m kubelet Node functional-170143 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-170143 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-170143 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 38m node-controller Node functional-170143 event: Registered Node functional-170143 in Controller
Normal NodeReady 38m kubelet Node functional-170143 status is now: NodeReady
Normal RegisteredNode 37m node-controller Node functional-170143 event: Registered Node functional-170143 in Controller
Normal Starting 36m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 36m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 36m (x8 over 36m) kubelet Node functional-170143 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 36m (x8 over 36m) kubelet Node functional-170143 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 36m (x7 over 36m) kubelet Node functional-170143 status is now: NodeHasSufficientPID
Normal RegisteredNode 36m node-controller Node functional-170143 event: Registered Node functional-170143 in Controller
*
* ==> dmesg <==
* [Nov 7 17:16] WSL2: Performing memory compaction.
[Nov 7 17:17] WSL2: Performing memory compaction.
[Nov 7 17:18] WSL2: Performing memory compaction.
[Nov 7 17:19] WSL2: Performing memory compaction.
[Nov 7 17:20] WSL2: Performing memory compaction.
[Nov 7 17:21] WSL2: Performing memory compaction.
[Nov 7 17:22] WSL2: Performing memory compaction.
[Nov 7 17:23] WSL2: Performing memory compaction.
[Nov 7 17:24] WSL2: Performing memory compaction.
[Nov 7 17:25] WSL2: Performing memory compaction.
[Nov 7 17:26] WSL2: Performing memory compaction.
[Nov 7 17:27] WSL2: Performing memory compaction.
[Nov 7 17:28] WSL2: Performing memory compaction.
[Nov 7 17:29] WSL2: Performing memory compaction.
[Nov 7 17:30] WSL2: Performing memory compaction.
[Nov 7 17:31] WSL2: Performing memory compaction.
[Nov 7 17:32] WSL2: Performing memory compaction.
[Nov 7 17:33] WSL2: Performing memory compaction.
[Nov 7 17:34] WSL2: Performing memory compaction.
[Nov 7 17:35] WSL2: Performing memory compaction.
[Nov 7 17:36] WSL2: Performing memory compaction.
[Nov 7 17:37] WSL2: Performing memory compaction.
[Nov 7 17:39] WSL2: Performing memory compaction.
[Nov 7 17:40] WSL2: Performing memory compaction.
[Nov 7 17:41] WSL2: Performing memory compaction.
*
* ==> etcd [28821c1a0230] <==
* {"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"999.1652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8262"}
{"level":"info","ts":"2022-11-07T17:07:58.111Z","caller":"traceutil/trace.go:171","msg":"trace[1416244392] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:801; }","duration":"999.2252ms","start":"2022-11-07T17:07:57.112Z","end":"2022-11-07T17:07:58.111Z","steps":["trace[1416244392] 'range keys from in-memory index tree' (duration: 999.0124ms)"],"step_count":1}
{"level":"info","ts":"2022-11-07T17:07:58.111Z","caller":"traceutil/trace.go:171","msg":"trace[135707562] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:801; }","duration":"372.2708ms","start":"2022-11-07T17:07:57.739Z","end":"2022-11-07T17:07:58.111Z","steps":["trace[135707562] 'range keys from in-memory index tree' (duration: 371.8672ms)"],"step_count":1}
{"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T17:07:57.739Z","time spent":"372.4093ms","remote":"127.0.0.1:57340","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T17:07:57.112Z","time spent":"999.3325ms","remote":"127.0.0.1:57314","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":8286,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"695.4235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2022-11-07T17:07:58.112Z","caller":"traceutil/trace.go:171","msg":"trace[1956856599] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:801; }","duration":"695.976ms","start":"2022-11-07T17:07:57.416Z","end":"2022-11-07T17:07:58.112Z","steps":["trace[1956856599] 'count revisions from in-memory index tree' (duration: 695.1103ms)"],"step_count":1}
{"level":"warn","ts":"2022-11-07T17:07:58.112Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T17:07:57.415Z","time spent":"696.1507ms","remote":"127.0.0.1:57346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":4,"response size":31,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
{"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"266.0972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2022-11-07T17:07:58.112Z","caller":"traceutil/trace.go:171","msg":"trace[2109282954] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:801; }","duration":"267.0168ms","start":"2022-11-07T17:07:57.845Z","end":"2022-11-07T17:07:58.112Z","steps":["trace[2109282954] 'count revisions from in-memory index tree' (duration: 265.9218ms)"],"step_count":1}
{"level":"info","ts":"2022-11-07T17:15:12.432Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":959}
{"level":"info","ts":"2022-11-07T17:15:12.434Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":959,"took":"1.3686ms"}
{"level":"info","ts":"2022-11-07T17:20:12.448Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1170}
{"level":"info","ts":"2022-11-07T17:20:12.449Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1170,"took":"567.8µs"}
{"level":"info","ts":"2022-11-07T17:22:04.280Z","caller":"traceutil/trace.go:171","msg":"trace[111581127] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1458; }","duration":"100.006ms","start":"2022-11-07T17:22:04.180Z","end":"2022-11-07T17:22:04.280Z","steps":["trace[111581127] 'count revisions from in-memory index tree' (duration: 96.1622ms)"],"step_count":1}
{"level":"warn","ts":"2022-11-07T17:24:11.289Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.5219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
{"level":"info","ts":"2022-11-07T17:24:11.289Z","caller":"traceutil/trace.go:171","msg":"trace[1482763664] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:1546; }","duration":"102.7202ms","start":"2022-11-07T17:24:11.186Z","end":"2022-11-07T17:24:11.289Z","steps":["trace[1482763664] 'range keys from in-memory index tree' (duration: 102.0781ms)"],"step_count":1}
{"level":"info","ts":"2022-11-07T17:25:12.476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1380}
{"level":"info","ts":"2022-11-07T17:25:12.477Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1380,"took":"664.2µs"}
{"level":"info","ts":"2022-11-07T17:30:12.496Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1591}
{"level":"info","ts":"2022-11-07T17:30:12.498Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1591,"took":"642.8µs"}
{"level":"info","ts":"2022-11-07T17:35:12.513Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1801}
{"level":"info","ts":"2022-11-07T17:35:12.514Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1801,"took":"612.2µs"}
{"level":"info","ts":"2022-11-07T17:40:12.532Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2011}
{"level":"info","ts":"2022-11-07T17:40:12.533Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2011,"took":"655.5µs"}
*
* ==> etcd [ae63504cf46e] <==
* {"level":"info","ts":"2022-11-07T17:03:55.088Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-11-07T17:03:55.091Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-11-07T17:03:55.094Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2022-11-07T17:04:03.383Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"209.1166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-170143\" ","response":"range_response_count:1 size:4573"}
{"level":"info","ts":"2022-11-07T17:04:03.383Z","caller":"traceutil/trace.go:171","msg":"trace[716453105] range","detail":"{range_begin:/registry/minions/functional-170143; range_end:; response_count:1; response_revision:418; }","duration":"209.3561ms","start":"2022-11-07T17:04:03.174Z","end":"2022-11-07T17:04:03.383Z","steps":["trace[716453105] 'agreement among raft nodes before linearized reading' (duration: 196.7318ms)"],"step_count":1}
{"level":"warn","ts":"2022-11-07T17:04:03.384Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.0579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-11-07T17:04:03.384Z","caller":"traceutil/trace.go:171","msg":"trace[286629770] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:419; }","duration":"111.1276ms","start":"2022-11-07T17:04:03.272Z","end":"2022-11-07T17:04:03.384Z","steps":["trace[286629770] 'agreement among raft nodes before linearized reading' (duration: 110.9985ms)"],"step_count":1}
{"level":"warn","ts":"2022-11-07T17:04:03.384Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.1724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-11-07T17:04:03.384Z","caller":"traceutil/trace.go:171","msg":"trace[434933487] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:419; }","duration":"103.3641ms","start":"2022-11-07T17:04:03.280Z","end":"2022-11-07T17:04:03.384Z","steps":["trace[434933487] 'agreement among raft nodes before linearized reading' (duration: 103.1491ms)"],"step_count":1}
{"level":"info","ts":"2022-11-07T17:04:03.384Z","caller":"traceutil/trace.go:171","msg":"trace[344659348] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"111.4028ms","start":"2022-11-07T17:04:03.272Z","end":"2022-11-07T17:04:03.384Z","steps":["trace[344659348] 'process raft request' (duration: 98.3363ms)","trace[344659348] 'compare' (duration: 12.1396ms)"],"step_count":2}
{"level":"warn","ts":"2022-11-07T17:04:03.384Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.0298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-functional-170143\" ","response":"range_response_count:1 size:5204"}
{"level":"info","ts":"2022-11-07T17:04:03.384Z","caller":"traceutil/trace.go:171","msg":"trace[1930719174] range","detail":"{range_begin:/registry/pods/kube-system/etcd-functional-170143; range_end:; response_count:1; response_revision:419; }","duration":"107.1822ms","start":"2022-11-07T17:04:03.277Z","end":"2022-11-07T17:04:03.384Z","steps":["trace[1930719174] 'agreement among raft nodes before linearized reading' (duration: 106.8497ms)"],"step_count":1}
{"level":"info","ts":"2022-11-07T17:04:03.698Z","caller":"traceutil/trace.go:171","msg":"trace[497007862] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"112.7291ms","start":"2022-11-07T17:04:03.585Z","end":"2022-11-07T17:04:03.698Z","steps":["trace[497007862] 'process raft request' (duration: 84.7532ms)","trace[497007862] 'compare' (duration: 27.6303ms)"],"step_count":2}
{"level":"info","ts":"2022-11-07T17:04:03.699Z","caller":"traceutil/trace.go:171","msg":"trace[1458494014] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"110.3974ms","start":"2022-11-07T17:04:03.589Z","end":"2022-11-07T17:04:03.699Z","steps":["trace[1458494014] 'process raft request' (duration: 109.4941ms)"],"step_count":1}
{"level":"info","ts":"2022-11-07T17:04:03.698Z","caller":"traceutil/trace.go:171","msg":"trace[1978917583] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"104.9236ms","start":"2022-11-07T17:04:03.593Z","end":"2022-11-07T17:04:03.698Z","steps":["trace[1978917583] 'process raft request' (duration: 104.7377ms)"],"step_count":1}
{"level":"warn","ts":"2022-11-07T17:04:03.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.4195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-11-07T17:04:03.700Z","caller":"traceutil/trace.go:171","msg":"trace[1974693587] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:426; }","duration":"122.5152ms","start":"2022-11-07T17:04:03.578Z","end":"2022-11-07T17:04:03.700Z","steps":["trace[1974693587] 'agreement among raft nodes before linearized reading' (duration: 92.6023ms)","trace[1974693587] 'range keys from in-memory index tree' (duration: 27.7916ms)"],"step_count":2}
{"level":"info","ts":"2022-11-07T17:04:47.971Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-11-07T17:04:47.971Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-170143","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/11/07 17:04:47 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/11/07 17:04:48 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-11-07T17:04:48.185Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-11-07T17:04:48.375Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-11-07T17:04:48.377Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-11-07T17:04:48.377Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-170143","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> kernel <==
* 17:41:49 up 56 min, 0 users, load average: 0.50, 0.67, 0.78
Linux functional-170143 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [afb78fb3244c] <==
* Trace[1432501637]: ---"Listing from storage done" 779ms (17:07:06.884)
Trace[1432501637]: [780.8709ms] [780.8709ms] END
I1107 17:07:06.885822 1 trace.go:205] Trace[1374766163]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:5c9854e3-cf26-4023-95fe-bd2ee388c743,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:06.104) (total time: 781ms):
Trace[1374766163]: ---"Listing from storage done" 780ms (17:07:06.884)
Trace[1374766163]: [781.1142ms] [781.1142ms] END
I1107 17:07:29.530796 1 trace.go:205] Trace[1347303991]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:4d6f392c-9162-4ba5-9e90-2d2efb2d0411,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:28.279) (total time: 1251ms):
Trace[1347303991]: ---"About to write a response" 1250ms (17:07:29.530)
Trace[1347303991]: [1.2511385s] [1.2511385s] END
I1107 17:07:29.530820 1 trace.go:205] Trace[402617218]: "List(recursive=true) etcd3" audit-id:f233fc42-5117-40b5-b9f9-b9e465c97a08,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Nov-2022 17:07:28.105) (total time: 1425ms):
Trace[402617218]: [1.4252376s] [1.4252376s] END
I1107 17:07:29.531458 1 trace.go:205] Trace[672415117]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:f233fc42-5117-40b5-b9f9-b9e465c97a08,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:28.105) (total time: 1425ms):
Trace[672415117]: ---"Listing from storage done" 1425ms (17:07:29.530)
Trace[672415117]: [1.4259126s] [1.4259126s] END
I1107 17:07:29.532004 1 trace.go:205] Trace[421264066]: "List(recursive=true) etcd3" audit-id:4fcd7a0b-9536-4f38-872b-02fded3f4752,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Nov-2022 17:07:28.105) (total time: 1425ms):
Trace[421264066]: [1.4259525s] [1.4259525s] END
I1107 17:07:29.532883 1 trace.go:205] Trace[68099947]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:4fcd7a0b-9536-4f38-872b-02fded3f4752,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:28.105) (total time: 1426ms):
Trace[68099947]: ---"Listing from storage done" 1426ms (17:07:29.532)
Trace[68099947]: [1.4268803s] [1.4268803s] END
I1107 17:07:58.113128 1 trace.go:205] Trace[401185032]: "List(recursive=true) etcd3" audit-id:0af106e6-ed5e-486b-bfb1-a245046aeb2a,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Nov-2022 17:07:57.111) (total time: 1001ms):
Trace[401185032]: [1.0017483s] [1.0017483s] END
I1107 17:07:58.113991 1 trace.go:205] Trace[782144908]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:0af106e6-ed5e-486b-bfb1-a245046aeb2a,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:57.111) (total time: 1002ms):
Trace[782144908]: ---"Listing from storage done" 1001ms (17:07:58.113)
Trace[782144908]: [1.0026453s] [1.0026453s] END
I1107 17:08:17.604879 1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.111.201.194]
I1107 17:08:21.156397 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.102.223.227]
*
* ==> kube-controller-manager [01399ac93dbc] <==
* I1107 17:04:15.970873 1 shared_informer.go:262] Caches are synced for HPA
I1107 17:04:15.970929 1 shared_informer.go:262] Caches are synced for PVC protection
I1107 17:04:15.972170 1 shared_informer.go:262] Caches are synced for expand
I1107 17:04:15.972286 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I1107 17:04:15.972189 1 shared_informer.go:262] Caches are synced for endpoint_slice
I1107 17:04:15.972287 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I1107 17:04:15.972217 1 shared_informer.go:262] Caches are synced for endpoint
I1107 17:04:15.972191 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I1107 17:04:15.972254 1 shared_informer.go:262] Caches are synced for cronjob
I1107 17:04:15.972277 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I1107 17:04:15.972276 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1107 17:04:15.972240 1 shared_informer.go:262] Caches are synced for disruption
I1107 17:04:15.973029 1 shared_informer.go:262] Caches are synced for ephemeral
I1107 17:04:15.975122 1 shared_informer.go:262] Caches are synced for ReplicationController
I1107 17:04:15.978251 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I1107 17:04:15.979384 1 shared_informer.go:262] Caches are synced for daemon sets
I1107 17:04:15.981994 1 shared_informer.go:262] Caches are synced for deployment
I1107 17:04:15.989398 1 shared_informer.go:262] Caches are synced for ReplicaSet
I1107 17:04:15.995107 1 shared_informer.go:262] Caches are synced for stateful set
I1107 17:04:15.998905 1 shared_informer.go:262] Caches are synced for resource quota
I1107 17:04:16.004545 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I1107 17:04:16.071854 1 shared_informer.go:262] Caches are synced for resource quota
I1107 17:04:16.375298 1 shared_informer.go:262] Caches are synced for garbage collector
I1107 17:04:16.375389 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1107 17:04:16.379453 1 shared_informer.go:262] Caches are synced for garbage collector
*
* ==> kube-controller-manager [b7062e63f12a] <==
* I1107 17:05:32.775277 1 shared_informer.go:262] Caches are synced for daemon sets
I1107 17:05:32.775624 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I1107 17:05:32.775928 1 shared_informer.go:262] Caches are synced for taint
I1107 17:05:32.776460 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
I1107 17:05:32.776507 1 taint_manager.go:204] "Starting NoExecuteTaintManager"
I1107 17:05:32.776679 1 event.go:294] "Event occurred" object="functional-170143" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-170143 event: Registered Node functional-170143 in Controller"
W1107 17:05:32.776728 1 node_lifecycle_controller.go:1058] Missing timestamp for Node functional-170143. Assuming now as a timestamp.
I1107 17:05:32.776740 1 taint_manager.go:209] "Sending events to api server"
I1107 17:05:32.776794 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I1107 17:05:32.777253 1 shared_informer.go:262] Caches are synced for endpoint_slice
I1107 17:05:32.779608 1 shared_informer.go:262] Caches are synced for PVC protection
I1107 17:05:32.875846 1 shared_informer.go:262] Caches are synced for attach detach
I1107 17:05:32.879751 1 shared_informer.go:262] Caches are synced for resource quota
I1107 17:05:32.885620 1 shared_informer.go:262] Caches are synced for resource quota
I1107 17:05:33.197215 1 shared_informer.go:262] Caches are synced for garbage collector
I1107 17:05:33.266122 1 shared_informer.go:262] Caches are synced for garbage collector
I1107 17:05:33.266263 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1107 17:05:39.674393 1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-5fcdfb5cc4 to 1"
I1107 17:05:39.719806 1 event.go:294] "Event occurred" object="default/hello-node-5fcdfb5cc4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-5fcdfb5cc4-9t5g8"
I1107 17:06:01.084587 1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-596b7fcdbf to 1"
I1107 17:06:01.174875 1 event.go:294] "Event occurred" object="default/mysql-596b7fcdbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-596b7fcdbf-f99rm"
I1107 17:06:20.187447 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1107 17:06:20.187609 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1107 17:08:20.878483 1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-6458c8fb6f to 1"
I1107 17:08:20.905232 1 event.go:294] "Event occurred" object="default/hello-node-connect-6458c8fb6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-6458c8fb6f-pstw6"
*
* ==> kube-proxy [b3b6091afc11] <==
* I1107 17:03:53.787423 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I1107 17:03:53.873678 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I1107 17:03:53.877208 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I1107 17:03:53.884338 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E1107 17:03:53.889731 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-170143": dial tcp 192.168.49.2:8441: connect: connection refused
I1107 17:04:03.387587 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I1107 17:04:03.387840 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I1107 17:04:03.388312 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1107 17:04:03.570972 1 server_others.go:206] "Using iptables Proxier"
I1107 17:04:03.571143 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I1107 17:04:03.571167 1 server_others.go:214] "Creating dualStackProxier for iptables"
I1107 17:04:03.571192 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1107 17:04:03.571243 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1107 17:04:03.571820 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1107 17:04:03.572465 1 server.go:661] "Version info" version="v1.25.3"
I1107 17:04:03.572587 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1107 17:04:03.573773 1 config.go:226] "Starting endpoint slice config controller"
I1107 17:04:03.573909 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1107 17:04:03.574041 1 config.go:317] "Starting service config controller"
I1107 17:04:03.574066 1 shared_informer.go:255] Waiting for caches to sync for service config
I1107 17:04:03.577513 1 config.go:444] "Starting node config controller"
I1107 17:04:03.577759 1 shared_informer.go:255] Waiting for caches to sync for node config
I1107 17:04:03.674979 1 shared_informer.go:262] Caches are synced for service config
I1107 17:04:03.675100 1 shared_informer.go:262] Caches are synced for endpoint slice config
I1107 17:04:03.677946 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-proxy [d333e1a26cba] <==
* I1107 17:05:19.473257 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I1107 17:05:19.477234 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I1107 17:05:19.481134 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I1107 17:05:19.485238 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I1107 17:05:19.488401 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I1107 17:05:19.673666 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I1107 17:05:19.673733 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I1107 17:05:19.673805 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1107 17:05:19.975603 1 server_others.go:206] "Using iptables Proxier"
I1107 17:05:19.975723 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I1107 17:05:19.975738 1 server_others.go:214] "Creating dualStackProxier for iptables"
I1107 17:05:19.975757 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1107 17:05:19.975783 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1107 17:05:19.976445 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1107 17:05:19.976947 1 server.go:661] "Version info" version="v1.25.3"
I1107 17:05:19.977075 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1107 17:05:19.977958 1 config.go:317] "Starting service config controller"
I1107 17:05:19.978109 1 shared_informer.go:255] Waiting for caches to sync for service config
I1107 17:05:19.979522 1 config.go:444] "Starting node config controller"
I1107 17:05:19.979615 1 config.go:226] "Starting endpoint slice config controller"
I1107 17:05:19.979707 1 shared_informer.go:255] Waiting for caches to sync for node config
I1107 17:05:19.979718 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1107 17:05:20.078373 1 shared_informer.go:262] Caches are synced for service config
I1107 17:05:20.080004 1 shared_informer.go:262] Caches are synced for node config
I1107 17:05:20.080229 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [c93c8ccb82ea] <==
* I1107 17:05:11.320852 1 serving.go:348] Generated self-signed cert in-memory
W1107 17:05:16.571935 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1107 17:05:16.571983 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1107 17:05:16.572007 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1107 17:05:16.572023 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1107 17:05:16.682584 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1107 17:05:16.682644 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1107 17:05:16.685249 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1107 17:05:16.685433 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1107 17:05:16.685460 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1107 17:05:16.685598 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1107 17:05:16.786618 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [ea4cf6560778] <==
* I1107 17:03:55.575272 1 serving.go:348] Generated self-signed cert in-memory
W1107 17:04:03.076340 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1107 17:04:03.076394 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
W1107 17:04:03.076422 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1107 17:04:03.076439 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1107 17:04:03.188275 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1107 17:04:03.188385 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1107 17:04:03.190894 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1107 17:04:03.191003 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1107 17:04:03.190926 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1107 17:04:03.192135 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1107 17:04:03.292762 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1107 17:04:47.880210 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I1107 17:04:47.880502 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E1107 17:04:47.880524 1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
I1107 17:04:47.880607 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
E1107 17:04:47.880721 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-11-07 17:02:24 UTC, end at Mon 2022-11-07 17:41:49 UTC. --
Nov 07 17:08:05 functional-170143 kubelet[9654]: I1107 17:08:05.866894 9654 reconciler.go:399] "Volume detached for volume \"kube-api-access-x4vtv\" (UniqueName: \"kubernetes.io/projected/121c7dc7-8244-410e-a584-a3e68b338d43-kube-api-access-x4vtv\") on node \"functional-170143\" DevicePath \"\""
Nov 07 17:08:05 functional-170143 kubelet[9654]: I1107 17:08:05.867060 9654 reconciler.go:399] "Volume detached for volume \"pvc-285ccea9-bf55-480d-a198-16b12f688a34\" (UniqueName: \"kubernetes.io/host-path/121c7dc7-8244-410e-a584-a3e68b338d43-pvc-285ccea9-bf55-480d-a198-16b12f688a34\") on node \"functional-170143\" DevicePath \"\""
Nov 07 17:08:05 functional-170143 kubelet[9654]: I1107 17:08:05.938730 9654 scope.go:115] "RemoveContainer" containerID="1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81"
Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.077112 9654 scope.go:115] "RemoveContainer" containerID="1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81"
Nov 07 17:08:06 functional-170143 kubelet[9654]: E1107 17:08:06.082894 9654 remote_runtime.go:599] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81" containerID="1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81"
Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.083284 9654 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81} err="failed to get container status \"1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81\": rpc error: code = Unknown desc = Error: No such container: 1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81"
Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.493239 9654 topology_manager.go:205] "Topology Admit Handler"
Nov 07 17:08:06 functional-170143 kubelet[9654]: E1107 17:08:06.493365 9654 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="121c7dc7-8244-410e-a584-a3e68b338d43" containerName="myfrontend"
Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.493413 9654 memory_manager.go:345] "RemoveStaleState removing state" podUID="121c7dc7-8244-410e-a584-a3e68b338d43" containerName="myfrontend"
Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.681401 9654 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-285ccea9-bf55-480d-a198-16b12f688a34\" (UniqueName: \"kubernetes.io/host-path/f05cebe7-e0b0-4e41-b10d-2b5757c91d06-pvc-285ccea9-bf55-480d-a198-16b12f688a34\") pod \"sp-pod\" (UID: \"f05cebe7-e0b0-4e41-b10d-2b5757c91d06\") " pod="default/sp-pod"
Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.681622 9654 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sslhp\" (UniqueName: \"kubernetes.io/projected/f05cebe7-e0b0-4e41-b10d-2b5757c91d06-kube-api-access-sslhp\") pod \"sp-pod\" (UID: \"f05cebe7-e0b0-4e41-b10d-2b5757c91d06\") " pod="default/sp-pod"
Nov 07 17:08:07 functional-170143 kubelet[9654]: I1107 17:08:07.801180 9654 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=121c7dc7-8244-410e-a584-a3e68b338d43 path="/var/lib/kubelet/pods/121c7dc7-8244-410e-a584-a3e68b338d43/volumes"
Nov 07 17:08:17 functional-170143 kubelet[9654]: I1107 17:08:17.558714 9654 topology_manager.go:205] "Topology Admit Handler"
Nov 07 17:08:17 functional-170143 kubelet[9654]: I1107 17:08:17.675186 9654 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4txj\" (UniqueName: \"kubernetes.io/projected/fbeae54f-433a-4ae5-a55e-f8bd1d679533-kube-api-access-f4txj\") pod \"nginx-svc\" (UID: \"fbeae54f-433a-4ae5-a55e-f8bd1d679533\") " pod="default/nginx-svc"
Nov 07 17:08:19 functional-170143 kubelet[9654]: I1107 17:08:19.182548 9654 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="836e27720f7919fc04451220be5e197c4cc978a57272c0a5805d9fc304f23bac"
Nov 07 17:08:20 functional-170143 kubelet[9654]: I1107 17:08:20.915784 9654 topology_manager.go:205] "Topology Admit Handler"
Nov 07 17:08:21 functional-170143 kubelet[9654]: I1107 17:08:21.081700 9654 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnzwc\" (UniqueName: \"kubernetes.io/projected/3fbcc286-7635-4244-8d99-7d79df3dd4c8-kube-api-access-jnzwc\") pod \"hello-node-connect-6458c8fb6f-pstw6\" (UID: \"3fbcc286-7635-4244-8d99-7d79df3dd4c8\") " pod="default/hello-node-connect-6458c8fb6f-pstw6"
Nov 07 17:08:22 functional-170143 kubelet[9654]: I1107 17:08:22.976386 9654 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="af36bfdb4c07a363460882876e1a61fd9659cb4337503ac32012f09e077a7574"
Nov 07 17:10:07 functional-170143 kubelet[9654]: W1107 17:10:07.899840 9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Nov 07 17:15:07 functional-170143 kubelet[9654]: W1107 17:15:07.902057 9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Nov 07 17:20:07 functional-170143 kubelet[9654]: W1107 17:20:07.905196 9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Nov 07 17:25:07 functional-170143 kubelet[9654]: W1107 17:25:07.908739 9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Nov 07 17:30:07 functional-170143 kubelet[9654]: W1107 17:30:07.911745 9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Nov 07 17:35:07 functional-170143 kubelet[9654]: W1107 17:35:07.915024 9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Nov 07 17:40:07 functional-170143 kubelet[9654]: W1107 17:40:07.917616 9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [0300249d11be] <==
* I1107 17:05:19.974352 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1107 17:05:20.075744 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1107 17:05:20.075828 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1107 17:05:37.508727 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1107 17:05:37.509275 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-170143_170128c6-60bf-4e39-86d5-21a9bfc3e342!
I1107 17:05:37.509220 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca31cf32-7adf-45c8-a4ba-52aeffd000e3", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-170143_170128c6-60bf-4e39-86d5-21a9bfc3e342 became leader
I1107 17:05:37.610584 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-170143_170128c6-60bf-4e39-86d5-21a9bfc3e342!
I1107 17:06:20.187126 1 controller.go:1332] provision "default/myclaim" class "standard": started
I1107 17:06:20.187394 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 020c2a1a-59da-42b2-a15b-d15e3c9a4150 388 0 2022-11-07 17:03:19 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-11-07 17:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-285ccea9-bf55-480d-a198-16b12f688a34 &PersistentVolumeClaim{ObjectMeta:{myclaim default 285ccea9-bf55-480d-a198-16b12f688a34 710 0 2022-11-07 17:06:20 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-11-07 17:06:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-11-07 17:06:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I1107 17:06:20.188008 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"285ccea9-bf55-480d-a198-16b12f688a34", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I1107 17:06:20.188319 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-285ccea9-bf55-480d-a198-16b12f688a34" provisioned
I1107 17:06:20.188351 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I1107 17:06:20.188361 1 volume_store.go:212] Trying to save persistentvolume "pvc-285ccea9-bf55-480d-a198-16b12f688a34"
I1107 17:06:20.207704 1 volume_store.go:219] persistentvolume "pvc-285ccea9-bf55-480d-a198-16b12f688a34" saved
I1107 17:06:20.208007 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"285ccea9-bf55-480d-a198-16b12f688a34", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-285ccea9-bf55-480d-a198-16b12f688a34
*
* ==> storage-provisioner [2ac37c176ed7] <==
* I1107 17:04:09.726847 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1107 17:04:09.781152 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1107 17:04:09.781305 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1107 17:04:27.213490 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1107 17:04:27.213689 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca31cf32-7adf-45c8-a4ba-52aeffd000e3", APIVersion:"v1", ResourceVersion:"536", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-170143_aaeac418-20d8-4c19-a3a8-6d2095862b64 became leader
I1107 17:04:27.213805 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-170143_aaeac418-20d8-4c19-a3a8-6d2095862b64!
I1107 17:04:27.314709 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-170143_aaeac418-20d8-4c19-a3a8-6d2095862b64!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-170143 -n functional-170143
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-170143 -n functional-170143: (1.6587836s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-170143 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-170143 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-170143 describe pod : exit status 1 (182.8818ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-170143 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2172.86s)