=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run: kubectl --context functional-165140 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run: kubectl --context functional-165140 expose deployment hello-node --type=NodePort --port=8080
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-xm4pw" [45fae5ae-c987-4fd8-9b7e-081ca8dbdd95] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-xm4pw" [45fae5ae-c987-4fd8-9b7e-081ca8dbdd95] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 32.0509224s
functional_test.go:1449: (dbg) Run: out/minikube-windows-amd64.exe -p functional-165140 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-165140 service list: (1.5453009s)
functional_test.go:1463: (dbg) Run: out/minikube-windows-amd64.exe -p functional-165140 service --namespace=default --https --url hello-node
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1392: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-165140 service --namespace=default --https --url hello-node: exit status 1 (35m11.2890627s)
-- stdout --
https://127.0.0.1:64831
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1465: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-165140 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1402: service test failed - dumping debug information
functional_test.go:1403: -----------------------service failure post-mortem--------------------------------
functional_test.go:1406: (dbg) Run: kubectl --context functional-165140 describe po hello-node
functional_test.go:1410: hello-node pod describe:
Name: hello-node-5fcdfb5cc4-xm4pw
Namespace: default
Priority: 0
Node: functional-165140/192.168.49.2
Start Time: Mon, 31 Oct 2022 16:55:55 +0000
Labels: app=hello-node
pod-template-hash=5fcdfb5cc4
Annotations: <none>
Status: Running
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Controlled By: ReplicaSet/hello-node-5fcdfb5cc4
Containers:
echoserver:
Container ID: docker://82453f218b4bed663ccd027ab2ff315e3b2cfd80f6e1f7ec83b0a98c0cb5fc9b
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 31 Oct 2022 16:56:20 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2bm2 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-j2bm2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-5fcdfb5cc4-xm4pw to functional-165140
Normal Pulling 35m kubelet, functional-165140 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 35m kubelet, functional-165140 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 22.8349173s
Normal Created 35m kubelet, functional-165140 Created container echoserver
Normal Started 35m kubelet, functional-165140 Started container echoserver
Name: hello-node-connect-6458c8fb6f-nn4bs
Namespace: default
Priority: 0
Node: functional-165140/192.168.49.2
Start Time: Mon, 31 Oct 2022 16:55:54 +0000
Labels: app=hello-node-connect
pod-template-hash=6458c8fb6f
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: ReplicaSet/hello-node-connect-6458c8fb6f
Containers:
echoserver:
Container ID: docker://9e7ef22d0a5524e7ba1d7d30b2fd284970f837a435e14ce6351975a4e2347ae7
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 31 Oct 2022 16:56:20 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5djwq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-5djwq:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-6458c8fb6f-nn4bs to functional-165140
Normal Pulling 35m kubelet, functional-165140 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 35m kubelet, functional-165140 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 23.6572738s
Normal Created 35m kubelet, functional-165140 Created container echoserver
Normal Started 35m kubelet, functional-165140 Started container echoserver
functional_test.go:1412: (dbg) Run: kubectl --context functional-165140 logs -l app=hello-node
functional_test.go:1416: hello-node logs:
functional_test.go:1418: (dbg) Run: kubectl --context functional-165140 describe svc hello-node
functional_test.go:1422: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.108.104.191
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32765/TCP
Endpoints: 172.17.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-165140
helpers_test.go:235: (dbg) docker inspect functional-165140:
-- stdout --
[
{
"Id": "0f0d96ee71fa8c0bcaa75c1b08acd81c6eb0d31bc0b5913d07af7bff95ecc86c",
"Created": "2022-10-31T16:52:16.96231Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 27173,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-10-31T16:52:17.9676156Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
"ResolvConfPath": "/var/lib/docker/containers/0f0d96ee71fa8c0bcaa75c1b08acd81c6eb0d31bc0b5913d07af7bff95ecc86c/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0f0d96ee71fa8c0bcaa75c1b08acd81c6eb0d31bc0b5913d07af7bff95ecc86c/hostname",
"HostsPath": "/var/lib/docker/containers/0f0d96ee71fa8c0bcaa75c1b08acd81c6eb0d31bc0b5913d07af7bff95ecc86c/hosts",
"LogPath": "/var/lib/docker/containers/0f0d96ee71fa8c0bcaa75c1b08acd81c6eb0d31bc0b5913d07af7bff95ecc86c/0f0d96ee71fa8c0bcaa75c1b08acd81c6eb0d31bc0b5913d07af7bff95ecc86c-json.log",
"Name": "/functional-165140",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-165140:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-165140",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/baca73b3973146e5aee30d5a9292329a04dd615a559e596694a45b24d622546b-init/diff:/var/lib/docker/overlay2/bbea95227f72041bda7d452cde150b91e78b9b2f4e843742441938633b44db6b/diff:/var/lib/docker/overlay2/2f8a20559c1d155b5546746bf2fecc2bc17d0351a35a5f44591da1c394f909a3/diff:/var/lib/docker/overlay2/4de0a5c0f0efde1250d63a14d1caeb451f85c05f1fee55386991590b47a68247/diff:/var/lib/docker/overlay2/9fc1d2ea36b946b55e5136634f55ca4e016d85a6890eff8a823c202cdc6d7e73/diff:/var/lib/docker/overlay2/fe13dda12c0e97f0ecfc2132100e10f6dcbffb22e87cb23401cd8dfa39150adb/diff:/var/lib/docker/overlay2/2de507ecbcda9ffae649f1d069e539c255396710007497c8fe05f77868349c9c/diff:/var/lib/docker/overlay2/2c929e0f0363cc9c77fbf582f231bfd9df85b174d05b5d948117b41c52a7ce33/diff:/var/lib/docker/overlay2/64aed5ce77209c2424be87f7c2384bdb0e810b9f043d090aaef8b0cb82ee47b9/diff:/var/lib/docker/overlay2/f8eba9c17507ad40d5c4e948f98e442ccbcecf4cd6dcf70af8aa5c779fd9333b/diff:/var/lib/docker/overlay2/85a4d9
8dd0003b8b01da268e5d8589c37c70d453e0a28337637f02e6025cb493/diff:/var/lib/docker/overlay2/d53ddd09ee16960cead505bbd900fda51c9974f751764dbd46f8f7ccd56b3fc7/diff:/var/lib/docker/overlay2/3ba182625068dc397ac89a1371feb20ca9993cc98ffcd38fd3c9f82e6dd26d57/diff:/var/lib/docker/overlay2/bd9ab828692bbaec1fa6944ffe52c6b1326a326a5677ed62f3ec6d065abc24c3/diff:/var/lib/docker/overlay2/ca932757b7123d11c167ddf596f00e098a4ff018baa3faa736b82feda40fbd73/diff:/var/lib/docker/overlay2/26587cdb266f648d9a622807558ea5743a398878bc22f35f4fb88e8f84031106/diff:/var/lib/docker/overlay2/226497d3dc5a184be4d3e6fcd67935c09642d33cc06991f2c0a92b6bf25c56cb/diff:/var/lib/docker/overlay2/c9e38045e9922a1f7a3717be78ca6024e6075b06c4e370377aadb235ac4f0a8b/diff:/var/lib/docker/overlay2/9d3851f9810ce1a22a838222967f10886f6c373405c1db84b937bf41fdb87aae/diff:/var/lib/docker/overlay2/11b6bd1ab195a131e114880a1c397c882903e5f8d8425e9e8b2d41100cb97b7f/diff:/var/lib/docker/overlay2/7f952a95b06df05bed94c3ae694108b947f63c4c901c294898cf187aa9c63f82/diff:/var/lib/d
ocker/overlay2/197536bff16ca054d09e1b2a302107c7c857ab315d96fc69dfdcbe4c88e191a2/diff:/var/lib/docker/overlay2/df42460a254c0cb765b22de1cde49d8560cf2d5b809f2e5a2468f3d4e28396e0/diff:/var/lib/docker/overlay2/df79904953fbc0e48f839f86aa13bc349859cf647d2885a60189f61e4a4bc644/diff:/var/lib/docker/overlay2/debbd4555937e032f2ae4303b7faedcbb51a619ae5b65376ca17eeba97a0741c/diff:/var/lib/docker/overlay2/ef55e95664ec3ee65b2247696033a56776cc80b368d8c981429302f140b67509/diff:/var/lib/docker/overlay2/6b9e5e763da0d2706152dd435ebde0e3fc4f32890b1711b7dd14e6c22f2b0f4f/diff:/var/lib/docker/overlay2/694101d8a808fff9d365b4676c5dc399b3dfd2340423439fac8b320607d43166/diff:/var/lib/docker/overlay2/94a8d9bc1bf4b5b1622f3a139e536609805de5c1eaefef1b3a64e867314e997b/diff:/var/lib/docker/overlay2/d2c7aa879bea9ea770bb900f6c833b2a94693ed3ecee4908087f38a826c25fa4/diff:/var/lib/docker/overlay2/f9b40714d6832eab2b0d62524a0e3b24c75cc1666bbcb0c28f370c309e48e7fe/diff:/var/lib/docker/overlay2/ff72c4442747c5d0a04f6f8f7dab41f4528a468ae9a825a1d10efb74990
4e3ab/diff:/var/lib/docker/overlay2/fcb4fbc7f9aaadaed7bd8efadd57accdff412eeae02521ec2ab1fd452e44195c/diff:/var/lib/docker/overlay2/675f9cb74f3b16a3f13725622b76ffe3d096f6ae3e73c3399f17cac8ea00b1ba/diff:/var/lib/docker/overlay2/6846a1ae4d09b60165b20f64eb096e83c91dd3c3d9ec916a6eb3528d41b68dd5/diff:/var/lib/docker/overlay2/f89766bdd390e4cc0e013f9ba02db3291ccda68cdc3713defad3f39f1598b773/diff:/var/lib/docker/overlay2/63ac57c86e8123b639d2809109b501bd0f59ea0c2e8fd40e70b4bb54056d8578/diff:/var/lib/docker/overlay2/2c0b8906e737ba0a7288d7a229af9a9994b9d0c2e2a67ed076f72bef426df50e/diff:/var/lib/docker/overlay2/ec7bd661e84051e7a0b247078237e4d56e974f512a3e3519a6d0d0971ff8adab/diff:/var/lib/docker/overlay2/d87e9e90f6655ecc154c1549c51ea83ad56205c4c518040544c348813d297b21/diff:/var/lib/docker/overlay2/ee0342ddc8da02aca9ffe7d232d9b502bc97ef5a35437d4937bcad14c11ed599/diff:/var/lib/docker/overlay2/92b2377ea3f8107446919f10e6eb8cc69e7a5a67a2e8e5d7ffb63150afc4440a/diff:/var/lib/docker/overlay2/898c9439252c6c9605a50b0f9966f346e09ac9
ea4dafcec2906dd6f0d59f0c93/diff",
"MergedDir": "/var/lib/docker/overlay2/baca73b3973146e5aee30d5a9292329a04dd615a559e596694a45b24d622546b/merged",
"UpperDir": "/var/lib/docker/overlay2/baca73b3973146e5aee30d5a9292329a04dd615a559e596694a45b24d622546b/diff",
"WorkDir": "/var/lib/docker/overlay2/baca73b3973146e5aee30d5a9292329a04dd615a559e596694a45b24d622546b/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-165140",
"Source": "/var/lib/docker/volumes/functional-165140/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-165140",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-165140",
"name.minikube.sigs.k8s.io": "functional-165140",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "521129ce4e6b22869776891de5a8bd2c37e40246c3634ba0f32c167714f54223",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64580"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64576"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64577"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64578"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64579"
}
]
},
"SandboxKey": "/var/run/docker/netns/521129ce4e6b",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-165140": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"0f0d96ee71fa",
"functional-165140"
],
"NetworkID": "c3a627ecd49d0c09a621ab0d298c5dd74a6f8dcc19eac839fe0a2a2b3bbc1cb9",
"EndpointID": "18b1cd9cf453f9c20eca4e9ef69411d738d4e3c9fb8e84386b7230da80a0c597",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-165140 -n functional-165140
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-165140 -n functional-165140: (1.535442s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-165140 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-165140 logs -n 25: (3.9557657s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| image | functional-165140 image save | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| | gcr.io/google-containers/addon-resizer:functional-165140 | | | | | |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| ssh | functional-165140 ssh sudo cat | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| | /usr/share/ca-certificates/8936.pem | | | | | |
| ssh | functional-165140 ssh sudo cat | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| | /etc/ssl/certs/51391683.0 | | | | | |
| ssh | functional-165140 ssh sudo cat | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| | /etc/ssl/certs/89362.pem | | | | | |
| image | functional-165140 image rm | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| | gcr.io/google-containers/addon-resizer:functional-165140 | | | | | |
| ssh | functional-165140 ssh sudo cat | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| | /usr/share/ca-certificates/89362.pem | | | | | |
| image | functional-165140 image ls | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| ssh | functional-165140 ssh sudo cat | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| | /etc/ssl/certs/3ec20f2e.0 | | | | | |
| image | functional-165140 image load | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| tunnel | functional-165140 tunnel | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | |
| | --alsologtostderr | | | | | |
| image | functional-165140 image ls | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:56 GMT |
| image | functional-165140 image save --daemon | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:56 GMT | 31 Oct 22 16:57 GMT |
| | gcr.io/google-containers/addon-resizer:functional-165140 | | | | | |
| ssh | functional-165140 ssh sudo cat | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:57 GMT | 31 Oct 22 16:57 GMT |
| | /etc/test/nested/copy/8936/hosts | | | | | |
| ssh | functional-165140 ssh echo | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:57 GMT | 31 Oct 22 16:57 GMT |
| | hello | | | | | |
| ssh | functional-165140 ssh cat | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:57 GMT | 31 Oct 22 16:57 GMT |
| | /etc/hostname | | | | | |
| update-context | functional-165140 | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:58 GMT | 31 Oct 22 16:59 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-165140 | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:59 GMT | 31 Oct 22 16:59 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-165140 | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:59 GMT | 31 Oct 22 16:59 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-165140 image ls | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:59 GMT | 31 Oct 22 16:59 GMT |
| | --format short | | | | | |
| image | functional-165140 image ls | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:59 GMT | 31 Oct 22 16:59 GMT |
| | --format yaml | | | | | |
| ssh | functional-165140 ssh pgrep | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:59 GMT | |
| | buildkitd | | | | | |
| image | functional-165140 image ls | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:59 GMT | 31 Oct 22 16:59 GMT |
| | --format json | | | | | |
| image | functional-165140 image ls | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:59 GMT | 31 Oct 22 16:59 GMT |
| | --format table | | | | | |
| image | functional-165140 image build -t | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:59 GMT | 31 Oct 22 16:59 GMT |
| | localhost/my-image:functional-165140 | | | | | |
| | testdata\build | | | | | |
| image | functional-165140 image ls | functional-165140 | minikube8\jenkins | v1.27.1 | 31 Oct 22 16:59 GMT | 31 Oct 22 16:59 GMT |
|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/10/31 16:56:17
Running on machine: minikube8
Binary: Built with gc go1.19.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1031 16:56:17.936994 6908 out.go:296] Setting OutFile to fd 888 ...
I1031 16:56:17.998584 6908 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 16:56:17.998584 6908 out.go:309] Setting ErrFile to fd 1012...
I1031 16:56:17.998584 6908 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 16:56:18.015586 6908 out.go:303] Setting JSON to false
I1031 16:56:18.018589 6908 start.go:116] hostinfo: {"hostname":"minikube8","uptime":2023,"bootTime":1667233355,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
W1031 16:56:18.018589 6908 start.go:124] gopshost.Virtualization returned error: not implemented yet
I1031 16:56:18.022572 6908 out.go:177] * [functional-165140] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
I1031 16:56:18.025572 6908 notify.go:220] Checking for updates...
I1031 16:56:18.027576 6908 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
I1031 16:56:18.030578 6908 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
I1031 16:56:18.032573 6908 out.go:177] - MINIKUBE_LOCATION=15232
I1031 16:56:18.034588 6908 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1031 16:56:18.040601 6908 config.go:180] Loaded profile config "functional-165140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 16:56:18.041728 6908 driver.go:365] Setting default libvirt URI to qemu:///system
I1031 16:56:18.352874 6908 docker.go:137] docker version: linux-20.10.20
I1031 16:56:18.362315 6908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 16:56:19.096385 6908 info.go:266] docker info: {ID:UKWT:QOHC:WUVS:FALO:LQWG:XCZB:BA37:G2YX:YDIB:NZMI:6ZPJ:J4HC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:55 SystemTime:2022-10-31 16:56:18.5084732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plug
ins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1031 16:56:19.343818 6908 out.go:177] * Using the docker driver based on existing profile
I1031 16:56:19.458296 6908 start.go:282] selected driver: docker
I1031 16:56:19.459264 6908 start.go:808] validating driver "docker" against &{Name:functional-165140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-165140 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 16:56:19.459611 6908 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1031 16:56:19.475418 6908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 16:56:20.204360 6908 info.go:266] docker info: {ID:UKWT:QOHC:WUVS:FALO:LQWG:XCZB:BA37:G2YX:YDIB:NZMI:6ZPJ:J4HC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:55 SystemTime:2022-10-31 16:56:19.6549297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1031 16:56:20.261368 6908 cni.go:95] Creating CNI manager for ""
I1031 16:56:20.261368 6908 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1031 16:56:20.261368 6908 start_flags.go:317] config:
{Name:functional-165140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-165140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 16:56:20.265350 6908 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Mon 2022-10-31 16:52:18 UTC, end at Mon 2022-10-31 17:31:45 UTC. --
Oct 31 16:54:59 functional-165140 dockerd[8011]: time="2022-10-31T16:54:59.473669400Z" level=info msg="Loading containers: done."
Oct 31 16:54:59 functional-165140 dockerd[8011]: time="2022-10-31T16:54:59.566391000Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
Oct 31 16:54:59 functional-165140 dockerd[8011]: time="2022-10-31T16:54:59.566598100Z" level=info msg="Daemon has completed initialization"
Oct 31 16:54:59 functional-165140 systemd[1]: Started Docker Application Container Engine.
Oct 31 16:54:59 functional-165140 dockerd[8011]: time="2022-10-31T16:54:59.626743700Z" level=info msg="API listen on [::]:2376"
Oct 31 16:54:59 functional-165140 dockerd[8011]: time="2022-10-31T16:54:59.634083200Z" level=info msg="API listen on /var/run/docker.sock"
Oct 31 16:55:00 functional-165140 dockerd[8011]: time="2022-10-31T16:55:00.045977500Z" level=error msg="Failed to compute size of container rootfs 19af2bea92df01855ab2ff3e8a6aa617cb53aa1f866250e0707c4c72d7ec0d76: mount does not exist"
Oct 31 16:55:00 functional-165140 dockerd[8011]: time="2022-10-31T16:55:00.230147700Z" level=error msg="Failed to compute size of container rootfs 1196f5157faa59c14a2d5de5273a6dc1fb8699f963caced453b52489a60829f1: mount does not exist"
Oct 31 16:55:08 functional-165140 dockerd[8011]: time="2022-10-31T16:55:08.839710800Z" level=info msg="ignoring event" container=e228fbac0515ff8f716cdea6cf371978a1fc240d618302149c92941c5b7cc0c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:08 functional-165140 dockerd[8011]: time="2022-10-31T16:55:08.839822000Z" level=info msg="ignoring event" container=7a2b881ad9c85bac3eed183e09174dd2603f628cf86243097bd0d3d9c27798f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:09 functional-165140 dockerd[8011]: time="2022-10-31T16:55:08.928118800Z" level=info msg="ignoring event" container=9f2906e5290d420a3b12d6ccd2837b59b51f903721c8a1eae1ff02b174ccdfb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:09 functional-165140 dockerd[8011]: time="2022-10-31T16:55:08.928637300Z" level=info msg="ignoring event" container=7e49b5991a829bb05412e9f63282f31f148d4aaa5909974472f5e73664a160ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:09 functional-165140 dockerd[8011]: time="2022-10-31T16:55:08.928716800Z" level=info msg="ignoring event" container=b6716ea33481b1d8aa94347eca4dc6c00707ad0b66c91c51d5e01572e3e8103a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:09 functional-165140 dockerd[8011]: time="2022-10-31T16:55:08.929928200Z" level=info msg="ignoring event" container=8aa1a70696b73a414bc92a39b9fe3b57c1010cd2285d29530d86e81e2a0c953d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:09 functional-165140 dockerd[8011]: time="2022-10-31T16:55:08.930235100Z" level=info msg="ignoring event" container=e77271c0eadb29760272ea64a0b5e95dd08605a2c5fa709d7f99225a49ca7a70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:09 functional-165140 dockerd[8011]: time="2022-10-31T16:55:08.930277600Z" level=info msg="ignoring event" container=69149ea8d7ffc9a1a02664990c1378a1a82d7469793823c12085508de0396cfe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:09 functional-165140 dockerd[8011]: time="2022-10-31T16:55:08.930718200Z" level=info msg="ignoring event" container=3ddb50f82cc21916d3090071196435f6e18adf00aefe73fdaad07cd087f91a27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:09 functional-165140 dockerd[8011]: time="2022-10-31T16:55:09.033039400Z" level=info msg="ignoring event" container=5f09f01e78c3f6136dfd8eac0287324ddc83f43ac864774ea0aa54454e0a571a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:09 functional-165140 dockerd[8011]: time="2022-10-31T16:55:09.127450000Z" level=info msg="ignoring event" container=f15bd3911db879c6d0a26a140c1d233ecbee85bd6eb016aabc9804b3c64a13e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:17 functional-165140 dockerd[8011]: time="2022-10-31T16:55:17.974445400Z" level=info msg="ignoring event" container=b644bc4292c19788eca0833e0117a860ce73be6bc7930ccb12fd816d9828ec2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:55:31 functional-165140 dockerd[8011]: time="2022-10-31T16:55:31.432009000Z" level=info msg="ignoring event" container=472b6b26a2c5b40a549bae65e3cbb60c6f355512391d67c1e09c2f42c3334df1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:58:50 functional-165140 dockerd[8011]: time="2022-10-31T16:58:50.575556600Z" level=info msg="ignoring event" container=a3f3a738879409b0aeb4a3b1008f71852174e491107d52fc4345bbfb1f2d70e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:58:50 functional-165140 dockerd[8011]: time="2022-10-31T16:58:50.722361700Z" level=info msg="ignoring event" container=dc1926d67f88705eeebd2feecf0dd0e0646b79791e279b5a3d0871d220316709 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:59:09 functional-165140 dockerd[8011]: time="2022-10-31T16:59:09.623967800Z" level=info msg="ignoring event" container=c97b1e4c7d505eda28b164141e4c963577ed97f346e7be7a9e55c1cbe7d8e412 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 16:59:10 functional-165140 dockerd[8011]: time="2022-10-31T16:59:10.236203400Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
a179099da40e1 nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f 32 minutes ago Running myfrontend 0 25e3e989ac9e9
c289a52fa65bf mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04 33 minutes ago Running mysql 0 e91ee7bfbfd14
620b690635e34 nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3 34 minutes ago Running nginx 0 f203e2f623fa2
82453f218b4be k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 35 minutes ago Running echoserver 0 1250db6fe5fc6
9e7ef22d0a552 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 35 minutes ago Running echoserver 0 9d113d39ea941
50fd339ceebc3 beaaf00edd38a 36 minutes ago Running kube-proxy 3 49b86cebc8dd4
9988457f7e6b5 5185b96f0becf 36 minutes ago Running coredns 3 854ceb95c0134
314966124e122 6e38f40d628db 36 minutes ago Running storage-provisioner 4 79d064279bd51
802191fe01345 0346dbd74bcb9 36 minutes ago Running kube-apiserver 0 bdbe5ac2d88cf
c5de9a7583160 6039992312758 36 minutes ago Running kube-controller-manager 4 6ef8bcd492a51
4ac99dd6d1c38 6d23ec0e8b87e 36 minutes ago Running kube-scheduler 3 3d00a177a20f3
cbf604c4e01ee a8a176a5d5d69 36 minutes ago Running etcd 3 3aed3c6fb0b27
7e49b5991a829 6039992312758 36 minutes ago Exited kube-controller-manager 3 69149ea8d7ffc
f15bd3911db87 a8a176a5d5d69 36 minutes ago Exited etcd 2 5f09f01e78c3f
8aa1a70696b73 6d23ec0e8b87e 36 minutes ago Exited kube-scheduler 2 e77271c0eadb2
b644bc4292c19 5185b96f0becf 36 minutes ago Exited coredns 2 9f2906e5290d4
e228fbac0515f beaaf00edd38a 36 minutes ago Exited kube-proxy 2 3ddb50f82cc21
38b4442cb1002 6e38f40d628db 37 minutes ago Exited storage-provisioner 3 defdb94c2d3e4
*
* ==> coredns [9988457f7e6b] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> coredns [b644bc4292c1] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
[ERROR] plugin/errors: 2 1817129320684331616.652476851248818241. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
[ERROR] plugin/errors: 2 1817129320684331616.652476851248818241. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
*
* ==> describe nodes <==
* Name: functional-165140
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-165140
kubernetes.io/os=linux
minikube.k8s.io/commit=2e5adf9ee40d3190a65d3fa843a253d73ae4fdf3
minikube.k8s.io/name=functional-165140
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_10_31T16_52_55_0700
minikube.k8s.io/version=v1.27.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 31 Oct 2022 16:52:50 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-165140
AcquireTime: <unset>
RenewTime: Mon, 31 Oct 2022 17:31:44 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 31 Oct 2022 17:30:04 +0000 Mon, 31 Oct 2022 16:52:50 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 31 Oct 2022 17:30:04 +0000 Mon, 31 Oct 2022 16:52:50 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 31 Oct 2022 17:30:04 +0000 Mon, 31 Oct 2022 16:52:50 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 31 Oct 2022 17:30:04 +0000 Mon, 31 Oct 2022 16:53:06 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-165140
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: 996614ec4c814b87b7ec8ebee3d0e8c9
System UUID: 996614ec4c814b87b7ec8ebee3d0e8c9
Boot ID: 939d3759-7bbe-47bb-b9e0-6a6f2490533a
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-5fcdfb5cc4-xm4pw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default hello-node-connect-6458c8fb6f-nn4bs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default mysql-596b7fcdbf-lghmx 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 34m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m
kube-system coredns-565d847f94-qrfqf 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-165140 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-165140 250m (1%) 0 (0%) 0 (0%) 0 (0%) 36m
kube-system kube-controller-manager-functional-165140 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-cjxzm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-165140 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 38m kube-proxy
Normal Starting 36m kube-proxy
Normal Starting 37m kube-proxy
Normal NodeHasSufficientMemory 39m (x5 over 39m) kubelet Node functional-165140 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x5 over 39m) kubelet Node functional-165140 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 39m (x5 over 39m) kubelet Node functional-165140 status is now: NodeHasSufficientPID
Normal Starting 38m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 38m kubelet Node functional-165140 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-165140 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-165140 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 38m kubelet Node functional-165140 status is now: NodeReady
Normal RegisteredNode 38m node-controller Node functional-165140 event: Registered Node functional-165140 in Controller
Normal RegisteredNode 37m node-controller Node functional-165140 event: Registered Node functional-165140 in Controller
Normal Starting 36m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 36m (x8 over 36m) kubelet Node functional-165140 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 36m (x8 over 36m) kubelet Node functional-165140 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 36m (x7 over 36m) kubelet Node functional-165140 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 36m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 35m node-controller Node functional-165140 event: Registered Node functional-165140 in Controller
*
* ==> dmesg <==
* [Oct31 17:06] WSL2: Performing memory compaction.
[Oct31 17:07] WSL2: Performing memory compaction.
[Oct31 17:08] WSL2: Performing memory compaction.
[Oct31 17:09] WSL2: Performing memory compaction.
[Oct31 17:10] WSL2: Performing memory compaction.
[Oct31 17:11] WSL2: Performing memory compaction.
[Oct31 17:12] WSL2: Performing memory compaction.
[Oct31 17:13] WSL2: Performing memory compaction.
[Oct31 17:14] WSL2: Performing memory compaction.
[Oct31 17:15] WSL2: Performing memory compaction.
[Oct31 17:16] WSL2: Performing memory compaction.
[Oct31 17:17] WSL2: Performing memory compaction.
[Oct31 17:18] WSL2: Performing memory compaction.
[Oct31 17:19] WSL2: Performing memory compaction.
[Oct31 17:20] WSL2: Performing memory compaction.
[Oct31 17:21] WSL2: Performing memory compaction.
[Oct31 17:22] WSL2: Performing memory compaction.
[Oct31 17:23] WSL2: Performing memory compaction.
[Oct31 17:24] WSL2: Performing memory compaction.
[Oct31 17:25] WSL2: Performing memory compaction.
[Oct31 17:26] WSL2: Performing memory compaction.
[Oct31 17:27] WSL2: Performing memory compaction.
[Oct31 17:28] WSL2: Performing memory compaction.
[Oct31 17:29] WSL2: Performing memory compaction.
[Oct31 17:31] WSL2: Performing memory compaction.
*
* ==> etcd [cbf604c4e01e] <==
* {"level":"info","ts":"2022-10-31T16:58:41.845Z","caller":"traceutil/trace.go:171","msg":"trace[663359478] range","detail":"{range_begin:/registry/pods/default/mysql-596b7fcdbf-lghmx; range_end:; response_count:1; response_revision:843; }","duration":"348.2339ms","start":"2022-10-31T16:58:41.496Z","end":"2022-10-31T16:58:41.844Z","steps":["trace[663359478] 'range keys from in-memory index tree' (duration: 347.8671ms)"],"step_count":1}
{"level":"warn","ts":"2022-10-31T16:58:41.845Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-31T16:58:41.496Z","time spent":"348.3069ms","remote":"127.0.0.1:51350","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":2925,"request content":"key:\"/registry/pods/default/mysql-596b7fcdbf-lghmx\" "}
{"level":"warn","ts":"2022-10-31T16:58:41.844Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"470.5071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13523"}
{"level":"info","ts":"2022-10-31T16:58:41.845Z","caller":"traceutil/trace.go:171","msg":"trace[1285750148] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:843; }","duration":"471.0291ms","start":"2022-10-31T16:58:41.374Z","end":"2022-10-31T16:58:41.845Z","steps":["trace[1285750148] 'range keys from in-memory index tree' (duration: 470.3696ms)"],"step_count":1}
{"level":"warn","ts":"2022-10-31T16:58:41.845Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-31T16:58:41.374Z","time spent":"471.0786ms","remote":"127.0.0.1:51350","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13547,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2022-10-31T16:58:42.542Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128016764763020525,"retry-timeout":"500ms"}
{"level":"info","ts":"2022-10-31T16:58:42.559Z","caller":"traceutil/trace.go:171","msg":"trace[2028611927] linearizableReadLoop","detail":"{readStateIndex:932; appliedIndex:932; }","duration":"517.5564ms","start":"2022-10-31T16:58:42.042Z","end":"2022-10-31T16:58:42.559Z","steps":["trace[2028611927] 'read index received' (duration: 517.5274ms)","trace[2028611927] 'applied index is now lower than readState.Index' (duration: 25.5µs)"],"step_count":2}
{"level":"warn","ts":"2022-10-31T16:58:42.734Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"692.3804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-10-31T16:58:42.734Z","caller":"traceutil/trace.go:171","msg":"trace[626623627] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:844; }","duration":"692.5813ms","start":"2022-10-31T16:58:42.041Z","end":"2022-10-31T16:58:42.734Z","steps":["trace[626623627] 'agreement among raft nodes before linearized reading' (duration: 517.8682ms)","trace[626623627] 'range keys from in-memory index tree' (duration: 174.4878ms)"],"step_count":2}
{"level":"warn","ts":"2022-10-31T16:58:42.734Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"359.7235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13523"}
{"level":"warn","ts":"2022-10-31T16:58:42.734Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-31T16:58:42.041Z","time spent":"692.9766ms","remote":"127.0.0.1:51378","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2022-10-31T16:58:42.734Z","caller":"traceutil/trace.go:171","msg":"trace[887865635] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:844; }","duration":"360.0259ms","start":"2022-10-31T16:58:42.374Z","end":"2022-10-31T16:58:42.734Z","steps":["trace[887865635] 'agreement among raft nodes before linearized reading' (duration: 185.1476ms)","trace[887865635] 'range keys from in-memory index tree' (duration: 174.4672ms)"],"step_count":2}
{"level":"warn","ts":"2022-10-31T16:58:42.734Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-31T16:58:42.374Z","time spent":"360.2502ms","remote":"127.0.0.1:51350","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13547,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"info","ts":"2022-10-31T17:05:25.377Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":936}
{"level":"info","ts":"2022-10-31T17:05:25.379Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":936,"took":"1.3795ms"}
{"level":"info","ts":"2022-10-31T17:10:25.393Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1147}
{"level":"info","ts":"2022-10-31T17:10:25.394Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1147,"took":"663.7µs"}
{"level":"info","ts":"2022-10-31T17:15:25.406Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1357}
{"level":"info","ts":"2022-10-31T17:15:25.408Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1357,"took":"692.3µs"}
{"level":"info","ts":"2022-10-31T17:20:25.425Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1567}
{"level":"info","ts":"2022-10-31T17:20:25.426Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1567,"took":"618.1µs"}
{"level":"info","ts":"2022-10-31T17:25:25.439Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1778}
{"level":"info","ts":"2022-10-31T17:25:25.440Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1778,"took":"600.9µs"}
{"level":"info","ts":"2022-10-31T17:30:25.454Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1988}
{"level":"info","ts":"2022-10-31T17:30:25.455Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1988,"took":"516.2µs"}
*
* ==> etcd [f15bd3911db8] <==
* {"level":"info","ts":"2022-10-31T16:55:05.749Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-31T16:55:05.749Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-31T16:55:05.749Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-10-31T16:55:07.242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
{"level":"info","ts":"2022-10-31T16:55:07.242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
{"level":"info","ts":"2022-10-31T16:55:07.242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
{"level":"info","ts":"2022-10-31T16:55:07.242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
{"level":"info","ts":"2022-10-31T16:55:07.242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-10-31T16:55:07.242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
{"level":"info","ts":"2022-10-31T16:55:07.242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-10-31T16:55:07.266Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-165140 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-10-31T16:55:07.266Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T16:55:07.266Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T16:55:07.328Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-10-31T16:55:07.328Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-10-31T16:55:07.329Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-10-31T16:55:07.330Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-10-31T16:55:08.635Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-10-31T16:55:08.635Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-165140","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/10/31 16:55:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/10/31 16:55:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-10-31T16:55:08.731Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-10-31T16:55:08.741Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-31T16:55:08.827Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-31T16:55:08.827Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-165140","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> kernel <==
* 17:31:46 up 56 min, 0 users, load average: 0.54, 0.41, 0.64
Linux functional-165140 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [802191fe0134] <==
* {"level":"warn","ts":"2022-10-31T16:57:07.788Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001e9ce00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
E1031 16:57:07.789638 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
E1031 16:57:07.789734 1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
E1031 16:57:07.792038 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1031 16:57:07.793703 1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
E1031 16:57:07.795579 1 timeout.go:141] post-timeout activity - time-elapsed: 6.22ms, GET "/api/v1/services" result: <nil>
I1031 16:58:14.696150 1 trace.go:205] Trace[125987983]: "List(recursive=true) etcd3" audit-id:50769e28-d47f-4f87-a77a-fc4f43582c75,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (31-Oct-2022 16:58:13.360) (total time: 1335ms):
Trace[125987983]: [1.3355717s] [1.3355717s] END
I1031 16:58:14.696919 1 trace.go:205] Trace[181574055]: "GuaranteedUpdate etcd3" audit-id:d0d2345d-9832-4c26-9db6-994b08c639ee,key:/leases/kube-node-lease/functional-165140,type:*coordination.Lease (31-Oct-2022 16:58:13.428) (total time: 1268ms):
Trace[181574055]: ---"Txn call finished" err:<nil> 1267ms (16:58:14.696)
Trace[181574055]: [1.2680037s] [1.2680037s] END
I1031 16:58:14.696978 1 trace.go:205] Trace[1053194155]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:50769e28-d47f-4f87-a77a-fc4f43582c75,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (31-Oct-2022 16:58:13.360) (total time: 1336ms):
Trace[1053194155]: ---"Listing from storage done" 1335ms (16:58:14.696)
Trace[1053194155]: [1.3364404s] [1.3364404s] END
I1031 16:58:14.697193 1 trace.go:205] Trace[1464377640]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-165140,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:d0d2345d-9832-4c26-9db6-994b08c639ee,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (31-Oct-2022 16:58:13.428) (total time: 1268ms):
Trace[1464377640]: ---"Write to database call finished" len:502,err:<nil> 1268ms (16:58:14.697)
Trace[1464377640]: [1.268597s] [1.268597s] END
I1031 16:58:14.703022 1 trace.go:205] Trace[1298145067]: "List(recursive=true) etcd3" audit-id:93c02df0-646e-49d7-ba52-03b4a8a9a8c8,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (31-Oct-2022 16:58:13.534) (total time: 1168ms):
Trace[1298145067]: [1.1683152s] [1.1683152s] END
I1031 16:58:14.703960 1 trace.go:205] Trace[1081836801]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:434b04a4-2e53-49f0-904d-5761ed7543e2,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (31-Oct-2022 16:58:13.730) (total time: 972ms):
Trace[1081836801]: ---"About to write a response" 972ms (16:58:14.703)
Trace[1081836801]: [972.7435ms] [972.7435ms] END
I1031 16:58:14.704662 1 trace.go:205] Trace[126183463]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:93c02df0-646e-49d7-ba52-03b4a8a9a8c8,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (31-Oct-2022 16:58:13.534) (total time: 1169ms):
Trace[126183463]: ---"Listing from storage done" 1168ms (16:58:14.703)
Trace[126183463]: [1.1699866s] [1.1699866s] END
*
* ==> kube-controller-manager [7e49b5991a82] <==
* I1031 16:55:07.371948 1 serving.go:348] Generated self-signed cert in-memory
*
* ==> kube-controller-manager [c5de9a758316] <==
* I1031 16:55:46.227071 1 shared_informer.go:262] Caches are synced for resource quota
I1031 16:55:46.227316 1 shared_informer.go:262] Caches are synced for endpoint_slice
I1031 16:55:46.227692 1 shared_informer.go:262] Caches are synced for expand
I1031 16:55:46.227707 1 shared_informer.go:262] Caches are synced for ReplicationController
I1031 16:55:46.227739 1 shared_informer.go:262] Caches are synced for ReplicaSet
I1031 16:55:46.227760 1 shared_informer.go:262] Caches are synced for endpoint
I1031 16:55:46.228106 1 shared_informer.go:262] Caches are synced for HPA
I1031 16:55:46.228108 1 shared_informer.go:262] Caches are synced for disruption
I1031 16:55:46.228151 1 shared_informer.go:262] Caches are synced for daemon sets
I1031 16:55:46.228133 1 shared_informer.go:262] Caches are synced for crt configmap
I1031 16:55:46.228153 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I1031 16:55:46.228447 1 shared_informer.go:262] Caches are synced for PV protection
I1031 16:55:46.228984 1 shared_informer.go:262] Caches are synced for resource quota
I1031 16:55:46.235925 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I1031 16:55:46.236501 1 shared_informer.go:262] Caches are synced for attach detach
I1031 16:55:46.637360 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 16:55:46.645869 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 16:55:46.645970 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1031 16:55:54.293215 1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-6458c8fb6f to 1"
I1031 16:55:54.332184 1 event.go:294] "Event occurred" object="default/hello-node-connect-6458c8fb6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-6458c8fb6f-nn4bs"
I1031 16:55:55.449564 1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-5fcdfb5cc4 to 1"
I1031 16:55:55.471296 1 event.go:294] "Event occurred" object="default/hello-node-5fcdfb5cc4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-5fcdfb5cc4-xm4pw"
I1031 16:57:07.351703 1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-596b7fcdbf to 1"
I1031 16:57:07.428101 1 event.go:294] "Event occurred" object="default/mysql-596b7fcdbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-596b7fcdbf-lghmx"
I1031 16:57:13.631442 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
*
* ==> kube-proxy [50fd339ceebc] <==
* I1031 16:55:32.030846 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I1031 16:55:32.034815 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I1031 16:55:32.132511 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I1031 16:55:32.136398 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I1031 16:55:32.143116 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I1031 16:55:32.339034 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I1031 16:55:32.339259 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I1031 16:55:32.339442 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1031 16:55:32.631429 1 server_others.go:206] "Using iptables Proxier"
I1031 16:55:32.631649 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I1031 16:55:32.631674 1 server_others.go:214] "Creating dualStackProxier for iptables"
I1031 16:55:32.631700 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1031 16:55:32.631747 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1031 16:55:32.632071 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1031 16:55:32.632791 1 server.go:661] "Version info" version="v1.25.3"
I1031 16:55:32.632806 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 16:55:32.633884 1 config.go:317] "Starting service config controller"
I1031 16:55:32.634011 1 shared_informer.go:255] Waiting for caches to sync for service config
I1031 16:55:32.634062 1 config.go:444] "Starting node config controller"
I1031 16:55:32.634076 1 shared_informer.go:255] Waiting for caches to sync for node config
I1031 16:55:32.634203 1 config.go:226] "Starting endpoint slice config controller"
I1031 16:55:32.634236 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1031 16:55:32.734180 1 shared_informer.go:262] Caches are synced for service config
I1031 16:55:32.734482 1 shared_informer.go:262] Caches are synced for node config
I1031 16:55:32.738927 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-proxy [e228fbac0515] <==
* E1031 16:55:04.432853 1 proxier.go:656] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I1031 16:55:04.436933 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I1031 16:55:04.445040 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I1031 16:55:04.528450 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I1031 16:55:04.532526 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I1031 16:55:04.535910 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E1031 16:55:04.544154 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-165140": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:05.640522 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-165140": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:07.862224 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-165140": dial tcp 192.168.49.2:8441: connect: connection refused
*
* ==> kube-scheduler [4ac99dd6d1c3] <==
* I1031 16:55:24.634520 1 serving.go:348] Generated self-signed cert in-memory
I1031 16:55:29.661189 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1031 16:55:29.661306 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 16:55:29.760346 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1031 16:55:29.760457 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1031 16:55:29.760573 1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I1031 16:55:29.760657 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1031 16:55:29.760686 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1031 16:55:29.760757 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1031 16:55:29.760792 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1031 16:55:29.760661 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1031 16:55:29.927988 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1031 16:55:29.928070 1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I1031 16:55:29.928114 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [8aa1a70696b7] <==
* E1031 16:55:08.234563 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.234692 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 16:55:08.234783 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 16:55:08.234673 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.234874 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.234908 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 16:55:08.234568 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 16:55:08.234889 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.234958 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.234968 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 16:55:08.234717 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.235006 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 16:55:08.235290 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.235333 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 16:55:08.235381 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.235474 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 16:55:08.235654 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.235853 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 16:55:08.235469 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 16:55:08.235972 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
I1031 16:55:08.635123 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E1031 16:55:08.635552 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1031 16:55:08.635585 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1031 16:55:08.635836 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
E1031 16:55:08.635856 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-10-31 16:52:18 UTC, end at Mon 2022-10-31 17:31:47 UTC. --
Oct 31 16:57:14 functional-165140 kubelet[10117]: I1031 16:57:14.335089 10117 topology_manager.go:205] "Topology Admit Handler"
Oct 31 16:57:14 functional-165140 kubelet[10117]: I1031 16:57:14.435092 10117 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vd8q\" (UniqueName: \"kubernetes.io/projected/893cda7e-8919-4c43-8bb6-435151d78d5d-kube-api-access-2vd8q\") pod \"sp-pod\" (UID: \"893cda7e-8919-4c43-8bb6-435151d78d5d\") " pod="default/sp-pod"
Oct 31 16:57:14 functional-165140 kubelet[10117]: I1031 16:57:14.435357 10117 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8\" (UniqueName: \"kubernetes.io/host-path/893cda7e-8919-4c43-8bb6-435151d78d5d-pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8\") pod \"sp-pod\" (UID: \"893cda7e-8919-4c43-8bb6-435151d78d5d\") " pod="default/sp-pod"
Oct 31 16:57:16 functional-165140 kubelet[10117]: I1031 16:57:16.474262 10117 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="dc1926d67f88705eeebd2feecf0dd0e0646b79791e279b5a3d0871d220316709"
Oct 31 16:58:51 functional-165140 kubelet[10117]: I1031 16:58:51.251781 10117 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893cda7e-8919-4c43-8bb6-435151d78d5d-pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8" (OuterVolumeSpecName: "mypd") pod "893cda7e-8919-4c43-8bb6-435151d78d5d" (UID: "893cda7e-8919-4c43-8bb6-435151d78d5d"). InnerVolumeSpecName "pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 31 16:58:51 functional-165140 kubelet[10117]: I1031 16:58:51.251909 10117 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/893cda7e-8919-4c43-8bb6-435151d78d5d-pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8\") pod \"893cda7e-8919-4c43-8bb6-435151d78d5d\" (UID: \"893cda7e-8919-4c43-8bb6-435151d78d5d\") "
Oct 31 16:58:51 functional-165140 kubelet[10117]: I1031 16:58:51.252113 10117 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vd8q\" (UniqueName: \"kubernetes.io/projected/893cda7e-8919-4c43-8bb6-435151d78d5d-kube-api-access-2vd8q\") pod \"893cda7e-8919-4c43-8bb6-435151d78d5d\" (UID: \"893cda7e-8919-4c43-8bb6-435151d78d5d\") "
Oct 31 16:58:51 functional-165140 kubelet[10117]: I1031 16:58:51.252212 10117 reconciler.go:399] "Volume detached for volume \"pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8\" (UniqueName: \"kubernetes.io/host-path/893cda7e-8919-4c43-8bb6-435151d78d5d-pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8\") on node \"functional-165140\" DevicePath \"\""
Oct 31 16:58:51 functional-165140 kubelet[10117]: I1031 16:58:51.327590 10117 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/893cda7e-8919-4c43-8bb6-435151d78d5d-kube-api-access-2vd8q" (OuterVolumeSpecName: "kube-api-access-2vd8q") pod "893cda7e-8919-4c43-8bb6-435151d78d5d" (UID: "893cda7e-8919-4c43-8bb6-435151d78d5d"). InnerVolumeSpecName "kube-api-access-2vd8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 31 16:58:51 functional-165140 kubelet[10117]: I1031 16:58:51.353178 10117 reconciler.go:399] "Volume detached for volume \"kube-api-access-2vd8q\" (UniqueName: \"kubernetes.io/projected/893cda7e-8919-4c43-8bb6-435151d78d5d-kube-api-access-2vd8q\") on node \"functional-165140\" DevicePath \"\""
Oct 31 16:58:52 functional-165140 kubelet[10117]: I1031 16:58:52.087355 10117 scope.go:115] "RemoveContainer" containerID="a3f3a738879409b0aeb4a3b1008f71852174e491107d52fc4345bbfb1f2d70e0"
Oct 31 16:58:52 functional-165140 kubelet[10117]: I1031 16:58:52.598711 10117 topology_manager.go:205] "Topology Admit Handler"
Oct 31 16:58:52 functional-165140 kubelet[10117]: E1031 16:58:52.599039 10117 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="893cda7e-8919-4c43-8bb6-435151d78d5d" containerName="myfrontend"
Oct 31 16:58:52 functional-165140 kubelet[10117]: I1031 16:58:52.599153 10117 memory_manager.go:345] "RemoveStaleState removing state" podUID="893cda7e-8919-4c43-8bb6-435151d78d5d" containerName="myfrontend"
Oct 31 16:58:52 functional-165140 kubelet[10117]: I1031 16:58:52.765551 10117 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8\" (UniqueName: \"kubernetes.io/host-path/2ac1131f-300e-4f15-8376-1c4bacc0ad03-pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8\") pod \"sp-pod\" (UID: \"2ac1131f-300e-4f15-8376-1c4bacc0ad03\") " pod="default/sp-pod"
Oct 31 16:58:52 functional-165140 kubelet[10117]: I1031 16:58:52.765711 10117 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptsnp\" (UniqueName: \"kubernetes.io/projected/2ac1131f-300e-4f15-8376-1c4bacc0ad03-kube-api-access-ptsnp\") pod \"sp-pod\" (UID: \"2ac1131f-300e-4f15-8376-1c4bacc0ad03\") " pod="default/sp-pod"
Oct 31 16:58:52 functional-165140 kubelet[10117]: I1031 16:58:52.955331 10117 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=893cda7e-8919-4c43-8bb6-435151d78d5d path="/var/lib/kubelet/pods/893cda7e-8919-4c43-8bb6-435151d78d5d/volumes"
Oct 31 16:58:53 functional-165140 kubelet[10117]: I1031 16:58:53.697475 10117 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="25e3e989ac9e9aae5dce4bdae8ab6764d8e8eeed089185ebaa90c988d2854baf"
Oct 31 17:00:21 functional-165140 kubelet[10117]: W1031 17:00:21.156445 10117 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 17:05:21 functional-165140 kubelet[10117]: W1031 17:05:21.154950 10117 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 17:10:21 functional-165140 kubelet[10117]: W1031 17:10:21.155371 10117 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 17:15:21 functional-165140 kubelet[10117]: W1031 17:15:21.156532 10117 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 17:20:21 functional-165140 kubelet[10117]: W1031 17:20:21.158226 10117 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 17:25:21 functional-165140 kubelet[10117]: W1031 17:25:21.156727 10117 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 17:30:21 functional-165140 kubelet[10117]: W1031 17:30:21.159369 10117 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [314966124e12] <==
* I1031 16:55:30.733977 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1031 16:55:30.846469 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1031 16:55:30.846619 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1031 16:55:48.566041 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1031 16:55:48.566449 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-165140_274b84b4-215d-4e39-9b31-a85caa2cee58!
I1031 16:55:48.566450 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71a3e53b-8983-4099-96e1-c22bba77e68d", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-165140_274b84b4-215d-4e39-9b31-a85caa2cee58 became leader
I1031 16:55:48.668076 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-165140_274b84b4-215d-4e39-9b31-a85caa2cee58!
I1031 16:57:13.630978 1 controller.go:1332] provision "default/myclaim" class "standard": started
I1031 16:57:13.631902 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3509ad4b-f10e-4b2a-8ff3-1b37216395c8", APIVersion:"v1", ResourceVersion:"760", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I1031 16:57:13.631265 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 9e7632a4-2b22-457a-a5ff-92969943bebf 370 0 2022-10-31 16:53:14 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-10-31 16:53:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8 &PersistentVolumeClaim{ObjectMeta:{myclaim default 3509ad4b-f10e-4b2a-8ff3-1b37216395c8 760 0 2022-10-31 16:57:13 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-10-31 16:57:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-10-31 16:57:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{}
,Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I1031 16:57:13.633677 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8" provisioned
I1031 16:57:13.633975 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I1031 16:57:13.633995 1 volume_store.go:212] Trying to save persistentvolume "pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8"
I1031 16:57:13.660326 1 volume_store.go:219] persistentvolume "pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8" saved
I1031 16:57:13.660682 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3509ad4b-f10e-4b2a-8ff3-1b37216395c8", APIVersion:"v1", ResourceVersion:"760", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3509ad4b-f10e-4b2a-8ff3-1b37216395c8
*
* ==> storage-provisioner [38b4442cb100] <==
* I1031 16:54:20.323133 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1031 16:54:20.339934 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1031 16:54:20.340094 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1031 16:54:37.764253 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1031 16:54:37.764520 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-165140_052323d7-6d2a-43a6-a76a-98631f55d156!
I1031 16:54:37.765142 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71a3e53b-8983-4099-96e1-c22bba77e68d", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-165140_052323d7-6d2a-43a6-a76a-98631f55d156 became leader
I1031 16:54:37.865255 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-165140_052323d7-6d2a-43a6-a76a-98631f55d156!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-165140 -n functional-165140
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-165140 -n functional-165140: (1.6073176s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-165140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-165140 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-165140 describe pod : exit status 1 (184.9565ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-165140 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2154.48s)