=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run: kubectl --context functional-000838 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run: kubectl --context functional-000838 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-dbp6x" [e0ee0ba7-cb31-4d8a-8c8b-14f9922b5fe2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-dbp6x" [e0ee0ba7-cb31-4d8a-8c8b-14f9922b5fe2] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 44.1666819s
functional_test.go:1449: (dbg) Run: out/minikube-windows-amd64.exe -p functional-000838 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 service list: (1.6374202s)
functional_test.go:1463: (dbg) Run: out/minikube-windows-amd64.exe -p functional-000838 service --namespace=default --https --url hello-node
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1392: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-000838 service --namespace=default --https --url hello-node: exit status 1 (34m30.0768987s)
-- stdout --
https://127.0.0.1:62685
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1465: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-000838 service --namespace=default --https --url hello-node" : exit status 1
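The stderr above points at the root cause: with the Docker driver on Windows, `minikube service --url` opens a tunnel and keeps the terminal occupied, so the process never exits and the harness blocks for the full 34m30s even though the URL (`https://127.0.0.1:62685`) was already printed. A minimal sketch of working around that by scraping the URL out of partial captured stdout instead of waiting for process exit — `extract_service_urls` is a hypothetical helper for illustration, not part of the minikube test suite:

```python
import re

def extract_service_urls(stdout: str) -> list[str]:
    """Pull http(s) URLs out of captured `minikube service --url` output.

    On Windows with the Docker driver the command keeps a tunnel open and
    blocks, so the URL has to be scraped from partial output rather than
    collected after the process exits.
    """
    return re.findall(r"https?://[\d.]+:\d+", stdout)

# Sample taken from the -- stdout -- section of this failure:
captured = """\
-- stdout --
https://127.0.0.1:62685
-- /stdout --
"""
print(extract_service_urls(captured))  # ['https://127.0.0.1:62685']
```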
functional_test.go:1402: service test failed - dumping debug information
functional_test.go:1403: -----------------------service failure post-mortem--------------------------------
functional_test.go:1406: (dbg) Run: kubectl --context functional-000838 describe po hello-node
functional_test.go:1410: hello-node pod describe:
Name: hello-node-5fcdfb5cc4-dbp6x
Namespace: default
Priority: 0
Node: functional-000838/192.168.49.2
Start Time: Tue, 25 Oct 2022 00:13:22 +0000
Labels: app=hello-node
pod-template-hash=5fcdfb5cc4
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/hello-node-5fcdfb5cc4
Containers:
echoserver:
Container ID: docker://214cc39686ef2ed7eacf7b7e518d301be043c3993ae75d19b63f4f1352ff537d
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 25 Oct 2022 00:13:58 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fp9b (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-7fp9b:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-5fcdfb5cc4-dbp6x to functional-000838
Normal Pulling 35m kubelet, functional-000838 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 34m kubelet, functional-000838 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 30.5058703s
Normal Created 34m kubelet, functional-000838 Created container echoserver
Normal Started 34m kubelet, functional-000838 Started container echoserver
Name: hello-node-connect-6458c8fb6f-dqbjm
Namespace: default
Priority: 0
Node: functional-000838/192.168.49.2
Start Time: Tue, 25 Oct 2022 00:13:17 +0000
Labels: app=hello-node-connect
pod-template-hash=6458c8fb6f
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/hello-node-connect-6458c8fb6f
Containers:
echoserver:
Container ID: docker://d97f025bc5b7ea69a824106ae128acf751e7d0a24305ff39accbe3b6b5b903f9
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 25 Oct 2022 00:13:57 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l8smv (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-l8smv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-6458c8fb6f-dqbjm to functional-000838
Normal Pulling 35m kubelet, functional-000838 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 34m kubelet, functional-000838 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 35.766734s
Normal Created 34m kubelet, functional-000838 Created container echoserver
Normal Started 34m kubelet, functional-000838 Started container echoserver
functional_test.go:1412: (dbg) Run: kubectl --context functional-000838 logs -l app=hello-node
functional_test.go:1416: hello-node logs:
functional_test.go:1418: (dbg) Run: kubectl --context functional-000838 describe svc hello-node
functional_test.go:1422: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.99.199.53
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32309/TCP
Endpoints: 172.17.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
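For reference, a NodePort service is reachable from outside the cluster at `<node IP>:<NodePort>` only when the node is directly routable; with the Docker driver on Windows it is not, which is why minikube tunnels through 127.0.0.1 instead. A small sketch (the `nodeport_url` helper is hypothetical, shown only to relate the fields in the describe output above) composing that direct URL from the node IP (192.168.49.2) and the allocated NodePort (32309):

```python
def nodeport_url(node_ip: str, node_port: int, https: bool = False) -> str:
    """Compose the external URL for a NodePort service, assuming the node
    is directly routable (which the Docker driver on Windows is not --
    hence minikube's tunnel)."""
    scheme = "https" if https else "http"
    return f"{scheme}://{node_ip}:{node_port}"

# Values from the svc describe output above:
print(nodeport_url("192.168.49.2", 32309))  # http://192.168.49.2:32309
```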
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-000838
helpers_test.go:235: (dbg) docker inspect functional-000838:
-- stdout --
[
{
"Id": "1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d",
"Created": "2022-10-25T00:09:17.2468862Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 24910,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-10-25T00:09:18.1713589Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
"ResolvConfPath": "/var/lib/docker/containers/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d/hostname",
"HostsPath": "/var/lib/docker/containers/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d/hosts",
"LogPath": "/var/lib/docker/containers/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d-json.log",
"Name": "/functional-000838",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-000838:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-000838",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/fd7ceb1116e13e025437a1ba4e3b40fd2a09d79f0a65ee8af31dddf699ce34e7-init/diff:/var/lib/docker/overlay2/1d72d69c076943d6cd413bc50b6a474779145c6396136b4aef1829c16f4a6d69/diff:/var/lib/docker/overlay2/2712457ef6b3ec08714d64e5261a9b327c3f8db2156d7a1b493340af804c46f1/diff:/var/lib/docker/overlay2/956ad2e584ed04429b79ab0ee4bdc8977af3fcfbab3cc0ed570922cc07ffd0a6/diff:/var/lib/docker/overlay2/c4f80c5076f71429b4266dc613d1850e7295faded99f05e04fcb13d2cb4d3157/diff:/var/lib/docker/overlay2/18b12a09b44604345877d4490348801b993263f747090a3a48eac835ac323d86/diff:/var/lib/docker/overlay2/6ce1e052ac8d5221cb1978a93a4c4d18c74da80e998b6e54246cdc95997a769f/diff:/var/lib/docker/overlay2/9e6e7c177b550c9c4fc4af8222ccc9bfe5b01fa177f08388c541fde750e4df80/diff:/var/lib/docker/overlay2/c56ad1fbd8fd09ba635cb91b82c303fab8be925f82edac48c47ed2b99f054b36/diff:/var/lib/docker/overlay2/b4a229acad56b83bd9d04813f3f4cf0c8c562169b12ef1e88243f4588d0b28f9/diff:/var/lib/docker/overlay2/56f30b
af9b74a7e6afda16e0f90a1863a3db06b5fec5cf06828152edc0faa420/diff:/var/lib/docker/overlay2/4275e6a6be34231198b756601a3b51a1d8446e8830b1c4037b20370047b88b9e/diff:/var/lib/docker/overlay2/0a9f47913b546daa2d558a978beaaa9e1e7e73a568fa1ee9d198e1e2154d3f75/diff:/var/lib/docker/overlay2/f1895cfb690eaa9bf966dd3f040878344a80c0dc3606dd2d5e67d9495cfa3ff8/diff:/var/lib/docker/overlay2/84335bbaf957cb1942f1d774b817e78297dbe5ffeb7e2e406e7492cf5a720c7e/diff:/var/lib/docker/overlay2/d9a26e65c06347ae6f8f306617639febfee5427dffa6d33a6acb3abfc22092fb/diff:/var/lib/docker/overlay2/a6893072e83e913a455da1f55020a69e4cd75c9ca7b9893e47d184eaf0da806d/diff:/var/lib/docker/overlay2/2d4c8dbcc1a6e63159280d831a4e448df4587dae065b53837a0e735e579361c4/diff:/var/lib/docker/overlay2/6fd2d854ad2aede74411487bcfe2f1fa3c4e1bbfad739455a690a5801c7c9d18/diff:/var/lib/docker/overlay2/d8435d49436e1e6d94054688732a28cdf047031ca600d938ab879a3f72791749/diff:/var/lib/docker/overlay2/618bd9835cc6596945db86c2cd23a6ea6c60992ff42cb8ba7a13f96776d79bb3/diff:/var/lib/d
ocker/overlay2/8e9af4c331a1374dad5f203889fa4953cd3111c705011d2f885ce8a3a04daf2c/diff:/var/lib/docker/overlay2/b8b4d702f888aa572be928e4e449cfaed5da2a045d94f145c0d48b2f838a2dc5/diff:/var/lib/docker/overlay2/6b708706c388c674df30fea4b16deb3b96447089d2a1cd5341ef199bd5dc3c4e/diff:/var/lib/docker/overlay2/f3bab3644fefb2215fd7b4b857958be30f575fd080ec37030b8b970e46155cdc/diff:/var/lib/docker/overlay2/809d38d9cc75c39f4eab1c2c64257e010b66f6dd17717a251371701f51b07237/diff:/var/lib/docker/overlay2/b2fc12e35954dea9baf6e418bbc1b629a71863e855e4373e8d665590cd7cbc54/diff:/var/lib/docker/overlay2/34dcaea23605015741cd4c620ce445c935ca6a08892a5aa15165a8422bb013c0/diff:/var/lib/docker/overlay2/4c362976bdb9f18c68d5c294dc08d7939899992ed5f8bb13ab34f58ec03fcdd6/diff:/var/lib/docker/overlay2/316879c125d7c6ab5ddb970715d730f6a9ea41f2b58da1ac9379b1d528a25970/diff:/var/lib/docker/overlay2/241a6ea1a0e862f8ac9d51e14f03999907acd9030349143120fad52b3c1c2b97/diff:/var/lib/docker/overlay2/c64f861002875793ea9a7d58a0e0b96ad95c3c7fb2874b758d4fb1bc26c
34587/diff:/var/lib/docker/overlay2/9b91106560e299e000b1229f3c2774c8ff0b881dbb4a27b80b89d0287f2f581d/diff:/var/lib/docker/overlay2/48a0a6d3a2a4100e68d167121a7df5a2244821b71406e29d5cc8220307ed9847/diff:/var/lib/docker/overlay2/1f280e54c1637034501f87fed8ca123799984880082b190271d5fa183974cb70/diff:/var/lib/docker/overlay2/8b8d91bd6daf07b06612bec716b08ed3d8032a4caa291548eead78a2b2c7e037/diff:/var/lib/docker/overlay2/b3ab8284e9708da3d4a94f3bd549609f23fcc286b4c1522cdb244344a4957bba/diff:/var/lib/docker/overlay2/7cc92644ec11a70cec25faf398c533eaa555c3a0ab3e783bf6f0cb342f18de20/diff:/var/lib/docker/overlay2/7f44e48c3f9293e16b6fedacc411012e83674000293a110908fcbe7b8aa0f56c/diff:/var/lib/docker/overlay2/7ded7fd7dc10119d3c74efa565ab8580571328086d82d5e795e7adcd3276e653/diff:/var/lib/docker/overlay2/b4654f15c85f235a8a9d5b03067d9aacd8d02569b48170551e8cc1fb340698ad/diff:/var/lib/docker/overlay2/901a06d4c922f4dcb994eec1c950879f560844312e104093523c1f1637594c70/diff:/var/lib/docker/overlay2/0fdbbeb11fdbed96bd80868c62d4c13bf887e7
83043225667d2bde711d03b757/diff",
"MergedDir": "/var/lib/docker/overlay2/fd7ceb1116e13e025437a1ba4e3b40fd2a09d79f0a65ee8af31dddf699ce34e7/merged",
"UpperDir": "/var/lib/docker/overlay2/fd7ceb1116e13e025437a1ba4e3b40fd2a09d79f0a65ee8af31dddf699ce34e7/diff",
"WorkDir": "/var/lib/docker/overlay2/fd7ceb1116e13e025437a1ba4e3b40fd2a09d79f0a65ee8af31dddf699ce34e7/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-000838",
"Source": "/var/lib/docker/volumes/functional-000838/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-000838",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-000838",
"name.minikube.sigs.k8s.io": "functional-000838",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1a6470ca592c3ac6b279c8a6362eddc515dd628db56486031c46ade398fae54d",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62378"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62374"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62375"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62376"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62377"
}
]
},
"SandboxKey": "/var/run/docker/netns/1a6470ca592c",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-000838": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"1a85eb831e68",
"functional-000838"
],
"NetworkID": "c737760c84e2aa542619af8def82aa71511352a4e3d9fc646e3fe13e39a09c29",
"EndpointID": "7450f0f71fadf7af41fdb0f71e1702e3d9e51894c1199fe001ff87e41fdfcf84",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
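When debugging these port mappings by hand, the interesting fields can be extracted from `docker inspect` JSON programmatically rather than read out of the full dump. A sketch (the `host_ports` helper is hypothetical) run against a trimmed sample of the `NetworkSettings.Ports` structure shown above:

```python
import json

def host_ports(inspect_json: str) -> dict[str, str]:
    """Map container ports to the host ports Docker published them on,
    given `docker inspect` output (a JSON array of container objects)."""
    container = json.loads(inspect_json)[0]
    ports = container["NetworkSettings"]["Ports"]
    return {cport: bindings[0]["HostPort"]
            for cport, bindings in ports.items() if bindings}

# Trimmed sample of the inspect output above:
sample = json.dumps([{
    "NetworkSettings": {"Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "62378"}],
        "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "62377"}],
    }}
}])
print(host_ports(sample))  # {'22/tcp': '62378', '8441/tcp': '62377'}
```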
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-000838 -n functional-000838
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-000838 -n functional-000838: (1.6464908s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-000838 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 logs -n 25: (3.1699858s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| image | functional-000838 image ls | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| ssh | functional-000838 ssh sudo cat | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | /usr/share/ca-certificates/4200.pem | | | | | |
| service | functional-000838 service | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | |
| | --namespace=default --https | | | | | |
| | --url hello-node | | | | | |
| image | functional-000838 image save --daemon | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | gcr.io/google-containers/addon-resizer:functional-000838 | | | | | |
| ssh | functional-000838 ssh sudo cat | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | /etc/ssl/certs/51391683.0 | | | | | |
| docker-env | functional-000838 docker-env | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| ssh | functional-000838 ssh sudo cat | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | /etc/ssl/certs/42002.pem | | | | | |
| ssh | functional-000838 ssh sudo cat | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | /usr/share/ca-certificates/42002.pem | | | | | |
| docker-env | functional-000838 docker-env | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| ssh | functional-000838 ssh sudo cat | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | /etc/ssl/certs/3ec20f2e.0 | | | | | |
| ssh | functional-000838 ssh sudo cat | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | /etc/test/nested/copy/4200/hosts | | | | | |
| start | -p functional-000838 | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| start | -p functional-000838 | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| start | -p functional-000838 --dry-run | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=docker | | | | | |
| dashboard | --url --port 36195 | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | |
| | -p functional-000838 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| update-context | functional-000838 | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-000838 | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-000838 | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-000838 image ls | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | --format short | | | | | |
| image | functional-000838 image ls | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | --format yaml | | | | | |
| ssh | functional-000838 ssh pgrep | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | |
| | buildkitd | | | | | |
| image | functional-000838 image build -t | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | localhost/my-image:functional-000838 | | | | | |
| | testdata\build | | | | | |
| image | functional-000838 image ls | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| image | functional-000838 image ls | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | --format json | | | | | |
| image | functional-000838 image ls | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
| | --format table | | | | | |
|----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/10/25 00:14:19
Running on machine: minikube8
Binary: Built with gc go1.19.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1025 00:14:19.481508 12224 out.go:296] Setting OutFile to fd 700 ...
I1025 00:14:19.540111 12224 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 00:14:19.540111 12224 out.go:309] Setting ErrFile to fd 860...
I1025 00:14:19.540111 12224 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 00:14:19.560127 12224 out.go:303] Setting JSON to false
I1025 00:14:19.562113 12224 start.go:116] hostinfo: {"hostname":"minikube8","uptime":6504,"bootTime":1666650355,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
W1025 00:14:19.563117 12224 start.go:124] gopshost.Virtualization returned error: not implemented yet
I1025 00:14:19.567111 12224 out.go:177] * [functional-000838] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
I1025 00:14:19.571126 12224 notify.go:220] Checking for updates...
I1025 00:14:19.573103 12224 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
I1025 00:14:19.576110 12224 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
I1025 00:14:19.578114 12224 out.go:177] - MINIKUBE_LOCATION=14956
I1025 00:14:19.581111 12224 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1025 00:14:19.587118 12224 config.go:180] Loaded profile config "functional-000838": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1025 00:14:19.587118 12224 driver.go:362] Setting default libvirt URI to qemu:///system
I1025 00:14:19.879247 12224 docker.go:137] docker version: linux-20.10.17
I1025 00:14:19.887554 12224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1025 00:14:20.588007 12224 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-10-25 00:14:20.0839981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
I1025 00:14:20.591987 12224 out.go:177] * Using the docker driver based on existing profile
I1025 00:14:20.594022 12224 start.go:282] selected driver: docker
I1025 00:14:20.595027 12224 start.go:808] validating driver "docker" against &{Name:functional-000838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-000838 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false re
gistry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1025 00:14:20.595027 12224 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1025 00:14:20.618999 12224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1025 00:14:21.331304 12224 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-10-25 00:14:20.8408039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
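The `docker system info --format "{{json .}}"` probe above returns the whole daemon state as a single JSON object, which minikube then decodes into the struct printed in this record. A minimal sketch of pulling out the capacity fields minikube checks, using an abridged sample payload whose values are taken from this log (the real object has many more keys):

```python
import json

# Abridged sample of `docker system info --format "{{json .}}"` output;
# field values copied from the docker info record in this log.
raw = ('{"NCPU": 16, "MemTotal": 53902323712, '
       '"CgroupDriver": "cgroupfs", "ServerVersion": "20.10.17"}')

info = json.loads(raw)
mem_gib = info["MemTotal"] / 2**30  # bytes -> GiB
print(f'{info["NCPU"]} CPUs, {mem_gib:.1f} GiB RAM, '
      f'cgroup driver: {info["CgroupDriver"]}')
```

With the values logged here this reports 16 CPUs and about 50.2 GiB, matching the 52638988Ki node capacity shown later in `describe nodes`.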
I1025 00:14:21.399733 12224 cni.go:95] Creating CNI manager for ""
I1025 00:14:21.399733 12224 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1025 00:14:21.399733 12224 start_flags.go:317] config:
{Name:functional-000838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-000838 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1025 00:14:21.404747 12224 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Tue 2022-10-25 00:09:18 UTC, end at Tue 2022-10-25 00:48:43 UTC. --
Oct 25 00:12:08 functional-000838 dockerd[9081]: time="2022-10-25T00:12:08.977068300Z" level=info msg="Loading containers: done."
Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.064063700Z" level=info msg="Docker daemon" commit=e42327a graphdriver(s)=overlay2 version=20.10.18
Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.064262300Z" level=info msg="Daemon has completed initialization"
Oct 25 00:12:09 functional-000838 systemd[1]: Started Docker Application Container Engine.
Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.125288600Z" level=info msg="API listen on [::]:2376"
Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.134632900Z" level=info msg="API listen on /var/run/docker.sock"
Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.836627300Z" level=error msg="Failed to compute size of container rootfs e9479777ed36c57265f8a4fad9798c95c2d9867b8136889fee854abd10442c98: mount does not exist"
Oct 25 00:12:10 functional-000838 dockerd[9081]: time="2022-10-25T00:12:10.420334000Z" level=error msg="981d60152834a5ad4410dbf945579fb4927668b816d88136fbdf62a7dc3bba7b cleanup: failed to delete container from containerd: no such container"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.141245400Z" level=info msg="ignoring event" container=a8aa3370c142f36bd0779a2d40a176f2e5c19584ced48c180f87547c86788dd0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.339783200Z" level=info msg="ignoring event" container=399c046a7b6a950e8d0a432671268f11e90395eb8e8a7db942a811169396b615 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.339851400Z" level=info msg="ignoring event" container=ab767e575f80d59c66f2274ae2835f1e55f8e3181a6af2163563f4561f75f6ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.439286600Z" level=info msg="ignoring event" container=665bfac058865946be5e1082a1d6870b5d78fb13c429eb3e081bab8f527485cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.440757300Z" level=info msg="ignoring event" container=f0a04fa890e99ff35febc2e0c7d4dd0473d59dc67b8eb4b8e8ed3babe058ccd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.442000900Z" level=info msg="ignoring event" container=6a5b494850dfbf5f1539b19a54ca8142b427bda9902fbc18c26aa5a8041211c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.442079600Z" level=info msg="ignoring event" container=473c9aff1d582303688f909011d72fd0d42b57e3cba9090382c6bc1593db2079 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.445752500Z" level=info msg="ignoring event" container=bffe81aadcea1811c85eab2ab547df80aa32e96c1ed423976163896ef303a90c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.445821000Z" level=info msg="ignoring event" container=24867b2a72b4564c599a7de438eb2ec6334a0a49a9e9e7a2c7c045bfd6301693 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.538811800Z" level=info msg="ignoring event" container=38b9eed413557adb3ca54bd3f50a9601f4df9517b0d19dcb92ec2539eb4d4013 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.969369700Z" level=info msg="ignoring event" container=b31e2dfc12220919014929ee746ac0c213cf9ed542646598c1758de6ef8429ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:27 functional-000838 dockerd[9081]: time="2022-10-25T00:12:27.274263000Z" level=info msg="ignoring event" container=2e47cf062bdda82b2a85ebcecf4ca93e96ba820a7dd507b084da61fb02aa2806 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:12:41 functional-000838 dockerd[9081]: time="2022-10-25T00:12:41.041363400Z" level=info msg="ignoring event" container=24e50a9cd0e62f412c05e84da6b21fbffdef92dbcc3f9a64cb4aa630aa3cd929 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:13:51 functional-000838 dockerd[9081]: time="2022-10-25T00:13:51.848802900Z" level=info msg="ignoring event" container=b191d41a32e23f7a2934dd9918205a78800bec8959920abf8ff0898df10ed2ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:13:54 functional-000838 dockerd[9081]: time="2022-10-25T00:13:54.240019600Z" level=info msg="ignoring event" container=f41be73d98ce07a145d2bba9403a5c3ccefa15215bbe0df9f105706a949bca4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:14:46 functional-000838 dockerd[9081]: time="2022-10-25T00:14:46.552662100Z" level=info msg="ignoring event" container=d0986a7e6203ac1c997ec35509428ee9bec143929a8f5b331eb37762213e8e53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 25 00:14:47 functional-000838 dockerd[9081]: time="2022-10-25T00:14:47.780500900Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
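The dockerd journal entries above carry a logfmt-style payload (`time=… level=… msg=… container=…`) after the syslog prefix. When triaging a run like this one, it can help to pull those key/value pairs out programmatically, e.g. to group the burst of "ignoring event" deletions at 00:12:18 by container ID. A minimal sketch (the regex is an illustration, not how dockerd or minikube parse these lines):

```python
import re

# One dockerd journal line copied from the log above.
line = ('Oct 25 00:12:18 functional-000838 dockerd[9081]: '
        'time="2022-10-25T00:12:18.141245400Z" level=info msg="ignoring event" '
        'container=a8aa3370c142f36bd0779a2d40a176f2e5c19584ced48c180f87547c86788dd0 '
        'module=libcontainerd namespace=moby topic=/tasks/delete '
        'type="*events.TaskDelete"')

# Match key=value pairs, where value is either "quoted text" or a bare token.
pairs = {}
for m in re.finditer(r'(\w+)=("([^"]*)"|\S+)', line):
    key = m.group(1)
    value = m.group(3) if m.group(3) is not None else m.group(2)
    pairs[key] = value

print(pairs["level"], pairs["msg"], pairs["container"][:12])
```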
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4a3de2d01161a mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04 33 minutes ago Running mysql 0 77361eafd49ce
5b1a474395471 nginx@sha256:5ffb682b98b0362b66754387e86b0cd31a5cb7123e49e7f6f6617690900d20b2 34 minutes ago Running myfrontend 0 d862dd2690799
214cc39686ef2 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 34 minutes ago Running echoserver 0 06ead6c5c54a6
d97f025bc5b7e k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 34 minutes ago Running echoserver 0 54e4350fdd784
5e964d76932be nginx@sha256:bffb4330be734e3268087e28ca51f6ae926f7d4406c7f5b5ab50c5e22570dc32 35 minutes ago Running nginx 0 8c59b60228f4c
36483f466c3d6 beaaf00edd38a 36 minutes ago Running kube-proxy 5 9539d2af31e22
d0d2ae8d106f2 5185b96f0becf 36 minutes ago Running coredns 4 075d4f429d199
d6e59b1da9b5a 6e38f40d628db 36 minutes ago Running storage-provisioner 4 cecc96e23185a
e7c14f019cccc 0346dbd74bcb9 36 minutes ago Running kube-apiserver 0 10e3541eba0a6
c9582e744ed2b a8a176a5d5d69 36 minutes ago Running etcd 5 6624fd49b9e1f
895c338d258d2 6039992312758 36 minutes ago Running kube-controller-manager 4 31c417f2dfe56
575409fc1b630 6d23ec0e8b87e 36 minutes ago Running kube-scheduler 4 aa4d754227f59
bffe81aadcea1 6039992312758 36 minutes ago Exited kube-controller-manager 3 665bfac058865
b31e2dfc12220 6d23ec0e8b87e 36 minutes ago Exited kube-scheduler 3 ab767e575f80d
24867b2a72b45 beaaf00edd38a 36 minutes ago Exited kube-proxy 4 399c046a7b6a9
38b9eed413557 a8a176a5d5d69 36 minutes ago Exited etcd 4 6a5b494850dfb
2e47cf062bdda 5185b96f0becf 36 minutes ago Exited coredns 3 a8aa3370c142f
981d60152834a 6e38f40d628db 36 minutes ago Created storage-provisioner 3 7158d73f202d6
*
* ==> coredns [2e47cf062bdd] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
[ERROR] plugin/errors: 2 9164478859933691884.4647672910462180950. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
[ERROR] plugin/errors: 2 9164478859933691884.4647672910462180950. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
*
* ==> coredns [d0d2ae8d106f] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> describe nodes <==
* Name: functional-000838
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-000838
kubernetes.io/os=linux
minikube.k8s.io/commit=e51468b57074bb26eb09785222979dd1e5fe9cd4
minikube.k8s.io/name=functional-000838
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_10_25T00_09_55_0700
minikube.k8s.io/version=v1.27.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 25 Oct 2022 00:09:50 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-000838
AcquireTime: <unset>
RenewTime: Tue, 25 Oct 2022 00:48:35 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 25 Oct 2022 00:46:15 +0000 Tue, 25 Oct 2022 00:09:44 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 25 Oct 2022 00:46:15 +0000 Tue, 25 Oct 2022 00:09:44 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 25 Oct 2022 00:46:15 +0000 Tue, 25 Oct 2022 00:09:44 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 25 Oct 2022 00:46:15 +0000 Tue, 25 Oct 2022 00:10:06 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-000838
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: 18f31d64397c45b9b9d6ac880da4e8a3
System UUID: 18f31d64397c45b9b9d6ac880da4e8a3
Boot ID: 67927c6c-d6bd-41ca-86c3-f57a6a00a497
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.18
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-5fcdfb5cc4-dbp6x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default hello-node-connect-6458c8fb6f-dqbjm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default mysql-596b7fcdbf-zfh68 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 34m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
kube-system coredns-565d847f94-4xdpf 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-000838 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-000838 250m (1%) 0 (0%) 0 (0%) 0 (0%) 36m
kube-system kube-controller-manager-functional-000838 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-pr4lp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-000838 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 37m kube-proxy
Normal Starting 36m kube-proxy
Normal Starting 38m kube-proxy
Normal NodeHasSufficientMemory 39m (x6 over 39m) kubelet Node functional-000838 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x5 over 39m) kubelet Node functional-000838 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 39m (x5 over 39m) kubelet Node functional-000838 status is now: NodeHasSufficientPID
Normal Starting 38m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 38m kubelet Node functional-000838 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-000838 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 38m kubelet Node functional-000838 status is now: NodeHasSufficientMemory
Normal NodeReady 38m kubelet Node functional-000838 status is now: NodeReady
Normal RegisteredNode 38m node-controller Node functional-000838 event: Registered Node functional-000838 in Controller
Normal Starting 37m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 37m (x8 over 37m) kubelet Node functional-000838 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 37m (x8 over 37m) kubelet Node functional-000838 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 37m (x7 over 37m) kubelet Node functional-000838 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 37m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 37m node-controller Node functional-000838 event: Registered Node functional-000838 in Controller
Normal Starting 36m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 36m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 36m (x8 over 36m) kubelet Node functional-000838 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 36m (x8 over 36m) kubelet Node functional-000838 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 36m (x7 over 36m) kubelet Node functional-000838 status is now: NodeHasSufficientPID
Normal RegisteredNode 35m node-controller Node functional-000838 event: Registered Node functional-000838 in Controller
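The allocated-resources percentages in the `describe nodes` output above follow directly from the node capacity: requests are summed across the non-terminated pods and divided by the allocatable figures (16 CPUs, 52638988Ki memory). A quick check with the numbers from this log:

```python
# Node allocatable capacity, from the "describe nodes" section of this log.
cpu_capacity_m = 16 * 1000      # 16 cores expressed in millicores
mem_capacity_ki = 52638988      # allocatable memory in Ki

# Summed requests reported under "Allocated resources".
cpu_requests_m = 1350           # 600m (mysql) + 100m + 250m + 200m + 100m + 100m
mem_requests_ki = 682 * 1024    # 682Mi -> Ki

cpu_pct = 100 * cpu_requests_m / cpu_capacity_m
mem_pct = 100 * mem_requests_ki / mem_capacity_ki
print(f"cpu {cpu_requests_m}m ({cpu_pct:.1f}%), memory 682Mi ({mem_pct:.1f}%)")
```

This gives about 8.4% CPU and 1.3% memory, which kubectl truncates to the 8% and 1% shown in the table.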
*
* ==> dmesg <==
* [Oct25 00:23] WSL2: Performing memory compaction.
[Oct25 00:24] WSL2: Performing memory compaction.
[Oct25 00:25] WSL2: Performing memory compaction.
[Oct25 00:26] WSL2: Performing memory compaction.
[Oct25 00:27] WSL2: Performing memory compaction.
[Oct25 00:28] WSL2: Performing memory compaction.
[Oct25 00:30] WSL2: Performing memory compaction.
[Oct25 00:31] WSL2: Performing memory compaction.
[Oct25 00:32] WSL2: Performing memory compaction.
[Oct25 00:33] WSL2: Performing memory compaction.
[Oct25 00:34] WSL2: Performing memory compaction.
[Oct25 00:35] WSL2: Performing memory compaction.
[Oct25 00:36] WSL2: Performing memory compaction.
[Oct25 00:37] WSL2: Performing memory compaction.
[Oct25 00:38] WSL2: Performing memory compaction.
[Oct25 00:39] WSL2: Performing memory compaction.
[Oct25 00:40] WSL2: Performing memory compaction.
[Oct25 00:41] WSL2: Performing memory compaction.
[Oct25 00:42] WSL2: Performing memory compaction.
[Oct25 00:43] WSL2: Performing memory compaction.
[Oct25 00:44] WSL2: Performing memory compaction.
[Oct25 00:45] WSL2: Performing memory compaction.
[Oct25 00:46] WSL2: Performing memory compaction.
[Oct25 00:47] WSL2: Performing memory compaction.
[Oct25 00:48] WSL2: Performing memory compaction.
*
* ==> etcd [38b9eed41355] <==
* {"level":"info","ts":"2022-10-25T00:12:14.937Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-25T00:12:14.937Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-10-25T00:12:14.937Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-25T00:12:15.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
{"level":"info","ts":"2022-10-25T00:12:15.955Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-000838 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-10-25T00:12:15.955Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-25T00:12:15.955Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-25T00:12:15.958Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-10-25T00:12:15.959Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-10-25T00:12:15.960Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-10-25T00:12:15.960Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-10-25T00:12:18.136Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-10-25T00:12:18.136Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-000838","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/10/25 00:12:18 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/10/25 00:12:18 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-10-25T00:12:18.140Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-10-25T00:12:18.235Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-25T00:12:18.237Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-25T00:12:18.237Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-000838","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
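etcd emits one JSON object per log line (the bare `WARNING: … grpc:` lines above are the exception), so each record can be decoded independently when scanning a dump like this for warnings or leadership changes. A minimal sketch using a record copied from the section above; the trailing-term extraction is just an illustration for this particular raft message:

```python
import json

# One structured etcd record from the log above.
line = ('{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft",'
        '"caller":"etcdserver/zap_raft.go:77",'
        '"msg":"aec36adc501070cc became leader at term 5"}')

record = json.loads(line)
if record["level"] in ("warn", "error"):
    print("needs attention:", record["msg"])

# This message ends in the raft term number.
term = int(record["msg"].rsplit(" ", 1)[1])
print(record["logger"], "->", record["msg"], "(term", term, ")")
```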
*
* ==> etcd [c9582e744ed2] <==
* {"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.5752787s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13520"}
{"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.1777483s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
{"level":"info","ts":"2022-10-25T00:15:15.435Z","caller":"traceutil/trace.go:171","msg":"trace[643986308] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:826; }","duration":"1.7821465s","start":"2022-10-25T00:15:13.653Z","end":"2022-10-25T00:15:15.435Z","steps":["trace[643986308] 'range keys from in-memory index tree' (duration: 1.7817854s)"],"step_count":1}
{"level":"info","ts":"2022-10-25T00:15:15.435Z","caller":"traceutil/trace.go:171","msg":"trace[1827885195] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:826; }","duration":"1.575417s","start":"2022-10-25T00:15:13.860Z","end":"2022-10-25T00:15:15.435Z","steps":["trace[1827885195] 'range keys from in-memory index tree' (duration: 1.5749272s)"],"step_count":1}
{"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T00:15:13.653Z","time spent":"1.7823187s","remote":"127.0.0.1:60270","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T00:15:13.860Z","time spent":"1.5754913s","remote":"127.0.0.1:60204","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13544,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"info","ts":"2022-10-25T00:15:15.435Z","caller":"traceutil/trace.go:171","msg":"trace[997297366] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:826; }","duration":"2.1778854s","start":"2022-10-25T00:15:13.257Z","end":"2022-10-25T00:15:15.435Z","steps":["trace[997297366] 'range keys from in-memory index tree' (duration: 2.1776036s)"],"step_count":1}
{"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T00:15:13.257Z","time spent":"2.1780314s","remote":"127.0.0.1:60200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1141,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
{"level":"warn","ts":"2022-10-25T00:20:39.853Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.4605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-10-25T00:20:39.853Z","caller":"traceutil/trace.go:171","msg":"trace[807629467] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1060; }","duration":"104.6761ms","start":"2022-10-25T00:20:39.748Z","end":"2022-10-25T00:20:39.853Z","steps":["trace[807629467] 'agreement among raft nodes before linearized reading' (duration: 89.9094ms)"],"step_count":1}
{"level":"info","ts":"2022-10-25T00:22:35.324Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":932}
{"level":"info","ts":"2022-10-25T00:22:35.325Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":932,"took":"1.1828ms"}
{"level":"warn","ts":"2022-10-25T00:26:22.850Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.3562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-10-25T00:26:22.850Z","caller":"traceutil/trace.go:171","msg":"trace[1400951757] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1301; }","duration":"102.5573ms","start":"2022-10-25T00:26:22.748Z","end":"2022-10-25T00:26:22.850Z","steps":["trace[1400951757] 'agreement among raft nodes before linearized reading' (duration: 93.474ms)"],"step_count":1}
{"level":"info","ts":"2022-10-25T00:26:22.850Z","caller":"traceutil/trace.go:171","msg":"trace[757707022] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"101.2465ms","start":"2022-10-25T00:26:22.749Z","end":"2022-10-25T00:26:22.850Z","steps":["trace[757707022] 'process raft request' (duration: 92.1235ms)"],"step_count":1}
{"level":"info","ts":"2022-10-25T00:27:35.341Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1143}
{"level":"info","ts":"2022-10-25T00:27:35.342Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1143,"took":"700.4µs"}
{"level":"info","ts":"2022-10-25T00:32:35.356Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1353}
{"level":"info","ts":"2022-10-25T00:32:35.357Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1353,"took":"602.4µs"}
{"level":"info","ts":"2022-10-25T00:37:35.374Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1564}
{"level":"info","ts":"2022-10-25T00:37:35.376Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1564,"took":"913µs"}
{"level":"info","ts":"2022-10-25T00:42:35.397Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1774}
{"level":"info","ts":"2022-10-25T00:42:35.398Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1774,"took":"481.5µs"}
{"level":"info","ts":"2022-10-25T00:47:35.419Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1984}
{"level":"info","ts":"2022-10-25T00:47:35.420Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1984,"took":"572.1µs"}
*
* ==> kernel <==
* 00:48:44 up 54 min, 0 users, load average: 0.27, 0.42, 0.62
Linux functional-000838 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [e7c14f019ccc] <==
* I1025 00:14:03.260751 1 trace.go:205] Trace[1714950858]: "List(recursive=true) etcd3" audit-id:a4d00c04-2889-4a12-8475-c756ef3cd8d7,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (25-Oct-2022 00:14:02.659) (total time: 601ms):
Trace[1714950858]: [601.3483ms] [601.3483ms] END
I1025 00:14:03.261396 1 trace.go:205] Trace[271097132]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:a4d00c04-2889-4a12-8475-c756ef3cd8d7,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:14:02.659) (total time: 602ms):
Trace[271097132]: ---"Listing from storage done" 601ms (00:14:03.260)
Trace[271097132]: [602.0261ms] [602.0261ms] END
I1025 00:14:16.704975 1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.97.155.131]
I1025 00:14:44.596214 1 trace.go:205] Trace[444787370]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.49.2,type:*v1.Endpoints (25-Oct-2022 00:14:42.552) (total time: 2043ms):
Trace[444787370]: ---"Txn call finished" err:<nil> 2038ms (00:14:44.595)
Trace[444787370]: [2.0438564s] [2.0438564s] END
I1025 00:14:44.596911 1 trace.go:205] Trace[1470228253]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:1f7603d5-9a51-4cc3-82a6-b403934f758f,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:14:42.863) (total time: 1733ms):
Trace[1470228253]: ---"About to write a response" 1732ms (00:14:44.596)
Trace[1470228253]: [1.7331419s] [1.7331419s] END
I1025 00:14:44.598763 1 trace.go:205] Trace[538812282]: "List(recursive=true) etcd3" audit-id:25aa4187-f30b-4629-9f02-93bb6f8876a9,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (25-Oct-2022 00:14:42.849) (total time: 1749ms):
Trace[538812282]: [1.7494914s] [1.7494914s] END
I1025 00:14:44.599661 1 trace.go:205] Trace[875701398]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:25aa4187-f30b-4629-9f02-93bb6f8876a9,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:14:42.849) (total time: 1750ms):
Trace[875701398]: ---"Listing from storage done" 1749ms (00:14:44.598)
Trace[875701398]: [1.7504719s] [1.7504719s] END
I1025 00:15:15.437032 1 trace.go:205] Trace[1739597116]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:973a0fd3-63ce-40df-8d4d-eb73e94e9513,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:15:13.256) (total time: 2180ms):
Trace[1739597116]: ---"About to write a response" 2180ms (00:15:15.436)
Trace[1739597116]: [2.1801976s] [2.1801976s] END
I1025 00:15:15.437190 1 trace.go:205] Trace[1998770013]: "List(recursive=true) etcd3" audit-id:88010bbd-35a5-4e13-99a1-685e2875360e,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (25-Oct-2022 00:15:13.859) (total time: 1578ms):
Trace[1998770013]: [1.5780632s] [1.5780632s] END
I1025 00:15:15.437920 1 trace.go:205] Trace[2047773140]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:88010bbd-35a5-4e13-99a1-685e2875360e,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:15:13.859) (total time: 1578ms):
Trace[2047773140]: ---"Listing from storage done" 1578ms (00:15:15.437)
Trace[2047773140]: [1.5788195s] [1.5788195s] END
*
* ==> kube-controller-manager [895c338d258d] <==
* I1025 00:12:54.540868 1 range_allocator.go:166] Starting range CIDR allocator
I1025 00:12:54.540884 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I1025 00:12:54.540903 1 shared_informer.go:262] Caches are synced for cidrallocator
I1025 00:12:54.636569 1 shared_informer.go:262] Caches are synced for TTL
I1025 00:12:54.636697 1 shared_informer.go:262] Caches are synced for taint
I1025 00:12:54.636668 1 shared_informer.go:262] Caches are synced for daemon sets
I1025 00:12:54.636800 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W1025 00:12:54.636926 1 node_lifecycle_controller.go:1058] Missing timestamp for Node functional-000838. Assuming now as a timestamp.
I1025 00:12:54.637060 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I1025 00:12:54.636806 1 taint_manager.go:204] "Starting NoExecuteTaintManager"
I1025 00:12:54.637265 1 taint_manager.go:209] "Sending events to api server"
I1025 00:12:54.637466 1 event.go:294] "Event occurred" object="functional-000838" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-000838 event: Registered Node functional-000838 in Controller"
I1025 00:12:54.637557 1 shared_informer.go:262] Caches are synced for attach detach
I1025 00:12:54.637602 1 shared_informer.go:262] Caches are synced for GC
I1025 00:12:54.637723 1 shared_informer.go:262] Caches are synced for persistent volume
I1025 00:12:54.749034 1 shared_informer.go:262] Caches are synced for garbage collector
I1025 00:12:54.757591 1 shared_informer.go:262] Caches are synced for garbage collector
I1025 00:12:54.757695 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1025 00:13:15.713591 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1025 00:13:16.973837 1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-6458c8fb6f to 1"
I1025 00:13:17.049295 1 event.go:294] "Event occurred" object="default/hello-node-connect-6458c8fb6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-6458c8fb6f-dqbjm"
I1025 00:13:22.140438 1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-5fcdfb5cc4 to 1"
I1025 00:13:22.235336 1 event.go:294] "Event occurred" object="default/hello-node-5fcdfb5cc4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-5fcdfb5cc4-dbp6x"
I1025 00:14:16.837703 1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-596b7fcdbf to 1"
I1025 00:14:16.851034 1 event.go:294] "Event occurred" object="default/mysql-596b7fcdbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-596b7fcdbf-zfh68"
*
* ==> kube-controller-manager [bffe81aadcea] <==
*
*
* ==> kube-proxy [24867b2a72b4] <==
* E1025 00:12:14.549983 1 proxier.go:656] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I1025 00:12:14.635948 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I1025 00:12:14.639920 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I1025 00:12:14.643460 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I1025 00:12:14.646798 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I1025 00:12:14.650151 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E1025 00:12:14.735657 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-000838": dial tcp 192.168.49.2:8441: connect: connection refused
E1025 00:12:15.794808 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-000838": dial tcp 192.168.49.2:8441: connect: connection refused
E1025 00:12:18.037197 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-000838": dial tcp 192.168.49.2:8441: connect: connection refused
*
* ==> kube-proxy [36483f466c3d] <==
* I1025 00:12:41.540400 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I1025 00:12:41.545533 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I1025 00:12:41.635618 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I1025 00:12:41.639847 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I1025 00:12:41.642964 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I1025 00:12:41.836110 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I1025 00:12:41.836268 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I1025 00:12:41.837492 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1025 00:12:41.947094 1 server_others.go:206] "Using iptables Proxier"
I1025 00:12:41.947332 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I1025 00:12:41.947360 1 server_others.go:214] "Creating dualStackProxier for iptables"
I1025 00:12:41.947377 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1025 00:12:41.947404 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1025 00:12:41.947863 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1025 00:12:41.948463 1 server.go:661] "Version info" version="v1.25.3"
I1025 00:12:41.948683 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1025 00:12:41.949677 1 config.go:444] "Starting node config controller"
I1025 00:12:41.949885 1 shared_informer.go:255] Waiting for caches to sync for node config
I1025 00:12:41.950180 1 config.go:226] "Starting endpoint slice config controller"
I1025 00:12:41.950428 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1025 00:12:41.950245 1 config.go:317] "Starting service config controller"
I1025 00:12:41.950837 1 shared_informer.go:255] Waiting for caches to sync for service config
I1025 00:12:42.050002 1 shared_informer.go:262] Caches are synced for node config
I1025 00:12:42.051802 1 shared_informer.go:262] Caches are synced for service config
I1025 00:12:42.051918 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [575409fc1b63] <==
* I1025 00:12:33.972207 1 serving.go:348] Generated self-signed cert in-memory
W1025 00:12:38.839040 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1025 00:12:38.839863 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1025 00:12:38.840014 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1025 00:12:38.840038 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1025 00:12:39.039502 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1025 00:12:39.039649 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1025 00:12:39.041788 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1025 00:12:39.042494 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1025 00:12:39.042600 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1025 00:12:39.042986 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1025 00:12:39.142893 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [b31e2dfc1222] <==
* I1025 00:12:17.038848 1 serving.go:348] Generated self-signed cert in-memory
W1025 00:12:18.926252 1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
W1025 00:12:18.926478 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1025 00:12:18.926492 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1025 00:12:18.936029 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1025 00:12:18.936139 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1025 00:12:18.938074 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1025 00:12:18.938204 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1025 00:12:18.938271 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E1025 00:12:18.938626 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1025 00:12:18.938777 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1025 00:12:18.938791 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1025 00:12:18.938827 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I1025 00:12:18.938833 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E1025 00:12:18.939676 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Logs begin at Tue 2022-10-25 00:09:18 UTC, end at Tue 2022-10-25 00:48:44 UTC. --
Oct 25 00:13:26 functional-000838 kubelet[11340]: I1025 00:13:26.651326 11340 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="06ead6c5c54a68b62965e310300a50a30bc33bed5e06716b1210b4a45192f9e2"
Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.456251 11340 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95p55\" (UniqueName: \"kubernetes.io/projected/bfe72ebb-3731-474d-84b6-94b684b4df81-kube-api-access-95p55\") pod \"bfe72ebb-3731-474d-84b6-94b684b4df81\" (UID: \"bfe72ebb-3731-474d-84b6-94b684b4df81\") "
Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.456513 11340 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/bfe72ebb-3731-474d-84b6-94b684b4df81-pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\") pod \"bfe72ebb-3731-474d-84b6-94b684b4df81\" (UID: \"bfe72ebb-3731-474d-84b6-94b684b4df81\") "
Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.456609 11340 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe72ebb-3731-474d-84b6-94b684b4df81-pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a" (OuterVolumeSpecName: "mypd") pod "bfe72ebb-3731-474d-84b6-94b684b4df81" (UID: "bfe72ebb-3731-474d-84b6-94b684b4df81"). InnerVolumeSpecName "pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.460145 11340 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe72ebb-3731-474d-84b6-94b684b4df81-kube-api-access-95p55" (OuterVolumeSpecName: "kube-api-access-95p55") pod "bfe72ebb-3731-474d-84b6-94b684b4df81" (UID: "bfe72ebb-3731-474d-84b6-94b684b4df81"). InnerVolumeSpecName "kube-api-access-95p55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.556931 11340 reconciler.go:399] "Volume detached for volume \"pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\" (UniqueName: \"kubernetes.io/host-path/bfe72ebb-3731-474d-84b6-94b684b4df81-pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\") on node \"functional-000838\" DevicePath \"\""
Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.557062 11340 reconciler.go:399] "Volume detached for volume \"kube-api-access-95p55\" (UniqueName: \"kubernetes.io/projected/bfe72ebb-3731-474d-84b6-94b684b4df81-kube-api-access-95p55\") on node \"functional-000838\" DevicePath \"\""
Oct 25 00:13:56 functional-000838 kubelet[11340]: I1025 00:13:56.459734 11340 scope.go:115] "RemoveContainer" containerID="b191d41a32e23f7a2934dd9918205a78800bec8959920abf8ff0898df10ed2ac"
Oct 25 00:13:57 functional-000838 kubelet[11340]: I1025 00:13:57.752705 11340 topology_manager.go:205] "Topology Admit Handler"
Oct 25 00:13:57 functional-000838 kubelet[11340]: E1025 00:13:57.753027 11340 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="bfe72ebb-3731-474d-84b6-94b684b4df81" containerName="myfrontend"
Oct 25 00:13:57 functional-000838 kubelet[11340]: I1025 00:13:57.753135 11340 memory_manager.go:345] "RemoveStaleState removing state" podUID="bfe72ebb-3731-474d-84b6-94b684b4df81" containerName="myfrontend"
Oct 25 00:13:58 functional-000838 kubelet[11340]: I1025 00:13:58.053255 11340 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft54w\" (UniqueName: \"kubernetes.io/projected/5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b-kube-api-access-ft54w\") pod \"sp-pod\" (UID: \"5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b\") " pod="default/sp-pod"
Oct 25 00:13:58 functional-000838 kubelet[11340]: I1025 00:13:58.053417 11340 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\" (UniqueName: \"kubernetes.io/host-path/5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b-pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\") pod \"sp-pod\" (UID: \"5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b\") " pod="default/sp-pod"
Oct 25 00:13:58 functional-000838 kubelet[11340]: I1025 00:13:58.549137 11340 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bfe72ebb-3731-474d-84b6-94b684b4df81 path="/var/lib/kubelet/pods/bfe72ebb-3731-474d-84b6-94b684b4df81/volumes"
Oct 25 00:13:59 functional-000838 kubelet[11340]: I1025 00:13:59.997927 11340 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d862dd26907998346a8a714dcaeca5ed7358430eb37f8d9a197323354d8971f5"
Oct 25 00:14:16 functional-000838 kubelet[11340]: I1025 00:14:16.862276 11340 topology_manager.go:205] "Topology Admit Handler"
Oct 25 00:14:16 functional-000838 kubelet[11340]: I1025 00:14:16.864685 11340 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlwbz\" (UniqueName: \"kubernetes.io/projected/e4191e78-ae83-4272-9aa0-dd3c9d287cf5-kube-api-access-dlwbz\") pod \"mysql-596b7fcdbf-zfh68\" (UID: \"e4191e78-ae83-4272-9aa0-dd3c9d287cf5\") " pod="default/mysql-596b7fcdbf-zfh68"
Oct 25 00:14:18 functional-000838 kubelet[11340]: I1025 00:14:18.323660 11340 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="77361eafd49cea60cb995c77b6e5394a0b5c0da280359b5255b64016d2d21909"
Oct 25 00:17:30 functional-000838 kubelet[11340]: W1025 00:17:30.775959 11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 25 00:22:30 functional-000838 kubelet[11340]: W1025 00:22:30.775606 11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 25 00:27:30 functional-000838 kubelet[11340]: W1025 00:27:30.781090 11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 25 00:32:30 functional-000838 kubelet[11340]: W1025 00:32:30.781452 11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 25 00:37:30 functional-000838 kubelet[11340]: W1025 00:37:30.782935 11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 25 00:42:30 functional-000838 kubelet[11340]: W1025 00:42:30.850375 11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 25 00:47:30 functional-000838 kubelet[11340]: W1025 00:47:30.787218 11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [981d60152834] <==
*
*
* ==> storage-provisioner [d6e59b1da9b5] <==
* I1025 00:12:40.466263 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1025 00:12:40.556682 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1025 00:12:40.556956 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1025 00:12:58.068521 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1025 00:12:58.068895 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-000838_877d2184-e0b3-49f6-a581-c2014f095838!
I1025 00:12:58.068922 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"15a9ecb3-10ed-4a9e-9c32-2b27a682c62c", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-000838_877d2184-e0b3-49f6-a581-c2014f095838 became leader
I1025 00:12:58.169937 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-000838_877d2184-e0b3-49f6-a581-c2014f095838!
I1025 00:13:15.713292 1 controller.go:1332] provision "default/myclaim" class "standard": started
I1025 00:13:15.713566 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard d3344b57-bdc7-476c-9e6d-a3a302f8bda8 382 0 2022-10-25 00:10:13 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-10-25 00:10:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a &PersistentVolumeClaim{ObjectMeta:{myclaim default 74af0df9-4673-4b35-9b41-e6a28e4a469a 640 0 2022-10-25 00:13:15 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-10-25 00:13:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-10-25 00:13:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{}
,Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I1025 00:13:15.714234 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"74af0df9-4673-4b35-9b41-e6a28e4a469a", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I1025 00:13:15.714507 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a" provisioned
I1025 00:13:15.714535 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I1025 00:13:15.714545 1 volume_store.go:212] Trying to save persistentvolume "pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a"
I1025 00:13:15.740558 1 volume_store.go:219] persistentvolume "pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a" saved
I1025 00:13:15.740999 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"74af0df9-4673-4b35-9b41-e6a28e4a469a", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-000838 -n functional-000838
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-000838 -n functional-000838: (1.5464067s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-000838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-000838 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-000838 describe pod : exit status 1 (168.9927ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-000838 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2125.15s)