=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run: kubectl --context functional-185027 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run: kubectl --context functional-185027 expose deployment hello-node --type=NodePort --port=8080
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-sdlww" [d44c8121-4c89-48cb-bc37-e9b856c65887] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-sdlww" [d44c8121-4c89-48cb-bc37-e9b856c65887] Running
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 31.1031762s
functional_test.go:1449: (dbg) Run: out/minikube-windows-amd64.exe -p functional-185027 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-185027 service list: (1.8275116s)
functional_test.go:1463: (dbg) Run: out/minikube-windows-amd64.exe -p functional-185027 service --namespace=default --https --url hello-node
functional_test.go:1392: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-185027 service --namespace=default --https --url hello-node: exit status 1 (34m28.0281965s)
-- stdout --
https://127.0.0.1:60885
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1465: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-185027 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1402: service test failed - dumping debug information
functional_test.go:1403: -----------------------service failure post-mortem--------------------------------
functional_test.go:1406: (dbg) Run: kubectl --context functional-185027 describe po hello-node
functional_test.go:1410: hello-node pod describe:
Name: hello-node-5fcdfb5cc4-sdlww
Namespace: default
Priority: 0
Node: functional-185027/192.168.49.2
Start Time: Mon, 31 Oct 2022 18:55:26 +0000
Labels: app=hello-node
pod-template-hash=5fcdfb5cc4
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: ReplicaSet/hello-node-5fcdfb5cc4
Containers:
echoserver:
Container ID: docker://6ff40a0d5c1fbc0f4e9817f43eef3bf115f8a65d08675ea416ee4c4c37ba7676
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 31 Oct 2022 18:55:50 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ksrdx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-ksrdx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-5fcdfb5cc4-sdlww to functional-185027
Normal Pulling 35m kubelet, functional-185027 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 34m kubelet, functional-185027 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 22.0810848s
Normal Created 34m kubelet, functional-185027 Created container echoserver
Normal Started 34m kubelet, functional-185027 Started container echoserver
Name: hello-node-connect-6458c8fb6f-bntr4
Namespace: default
Priority: 0
Node: functional-185027/192.168.49.2
Start Time: Mon, 31 Oct 2022 18:57:08 +0000
Labels: app=hello-node-connect
pod-template-hash=6458c8fb6f
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/hello-node-connect-6458c8fb6f
Containers:
echoserver:
Container ID: docker://16088f6d54a40222d48af47b99ee74e38cccb870885f376d8e3f516466f482d2
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 31 Oct 2022 18:57:10 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n7s2b (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-n7s2b:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-6458c8fb6f-bntr4 to functional-185027
Normal Pulled 33m kubelet, functional-185027 Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal Created 33m kubelet, functional-185027 Created container echoserver
Normal Started 33m kubelet, functional-185027 Started container echoserver
functional_test.go:1412: (dbg) Run: kubectl --context functional-185027 logs -l app=hello-node
functional_test.go:1416: hello-node logs:
functional_test.go:1418: (dbg) Run: kubectl --context functional-185027 describe svc hello-node
functional_test.go:1422: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.107.136.235
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31554/TCP
Endpoints: 172.17.0.3:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-185027
helpers_test.go:235: (dbg) docker inspect functional-185027:
-- stdout --
[
{
"Id": "776ee30f1898e0d4ed3d9ea8c0fe5b8a598ce4fa417139286a1bcbad55d4aa33",
"Created": "2022-10-31T18:51:05.127907Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 27558,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-10-31T18:51:06.0826017Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
"ResolvConfPath": "/var/lib/docker/containers/776ee30f1898e0d4ed3d9ea8c0fe5b8a598ce4fa417139286a1bcbad55d4aa33/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/776ee30f1898e0d4ed3d9ea8c0fe5b8a598ce4fa417139286a1bcbad55d4aa33/hostname",
"HostsPath": "/var/lib/docker/containers/776ee30f1898e0d4ed3d9ea8c0fe5b8a598ce4fa417139286a1bcbad55d4aa33/hosts",
"LogPath": "/var/lib/docker/containers/776ee30f1898e0d4ed3d9ea8c0fe5b8a598ce4fa417139286a1bcbad55d4aa33/776ee30f1898e0d4ed3d9ea8c0fe5b8a598ce4fa417139286a1bcbad55d4aa33-json.log",
"Name": "/functional-185027",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-185027:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-185027",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/586e0e88e008a8048c5f799fb26d2f2ab063d171cddfcc320c98e9011e4a6232-init/diff:/var/lib/docker/overlay2/64fa48bb8c237bfe415e86ac6352e696fedf67e9d3f1e46e1e5e1f82f6ba5395/diff:/var/lib/docker/overlay2/0246811eed463a0b60a5b87f716df4a6e21dc83d26e9f043b506e92a72aea51d/diff:/var/lib/docker/overlay2/9fb0287da46d009fb9ccbbb300b61ae1c30b1c8fa8ef7fed21cf2fe2e8d9122a/diff:/var/lib/docker/overlay2/970b4da8b289a3925340d8436c538124ee09ea4ec66fbcfc4e91fab3a7cec722/diff:/var/lib/docker/overlay2/5888e330d11b4fdb3c4b3a89b6dc30e08f9793b06ae730e525232ef8dcd05a87/diff:/var/lib/docker/overlay2/3137bfd3e210f64e2673ae9596abe58332afa7bd302b9a3c32be2daaca15ccbf/diff:/var/lib/docker/overlay2/75c49a29e9e9ab68a12fddf655ccf41eb9cac7a1e7522f87978d548225c9c1c1/diff:/var/lib/docker/overlay2/1fff9bd824f58b98178b35380f7c0b8f983606c655412a906d26ebde3875d9eb/diff:/var/lib/docker/overlay2/100b37bee5e305cd233e4a03f2cabecf02bc59e36d2124fdba34ef174e9eb60b/diff:/var/lib/docker/overlay2/d48b299ea9bbe756af9d7b3fd11e9a6a006fd2d196b4814447a322c4c749b42b/diff:/var/lib/docker/overlay2/5aca5eab6ac373af46b4b96816c4dcb3641cc3ce0e97b15f67c5671d82bf1904/diff:/var/lib/docker/overlay2/89ee3e6b4b018997cec9d635a7ffddb570468addb33d4fd6e1d17ef8bdcacf98/diff:/var/lib/docker/overlay2/b161b7628d34e6d5cd57a6ccd7742c01c2a1bd6714cee7644b4e7a25b04a72d4/diff:/var/lib/docker/overlay2/2b1113fbc0db5154dc86a90b7a0c9f9b6ba80261fdfc7b3a186057a7a1dbc165/diff:/var/lib/docker/overlay2/07db2f468db4b0487d18108733d1c9365dc6f7c235e998ed8a895fce9b566f22/diff:/var/lib/docker/overlay2/8d1347240f3613edfc56cc1cb6f580737221830aa669113e9d7238bae376cff0/diff:/var/lib/docker/overlay2/1f54b8274a8eaf3584b47e0b17dfbc0089e71dd01eab9b6e3d8876c7e770b192/diff:/var/lib/docker/overlay2/026cf0b7cdceea5bc6371cf5cae2ae48e3e6b735b9ed235154acba0cf5278a6b/diff:/var/lib/docker/overlay2/88146ef26a660051a660aab3ca3476b2ca6e2f93f79b8d95df716ba76ff33830/diff:/var/lib/docker/overlay2/1c42011fe8a73d83c7bc5d8525bcc31e22c121a4bde5dd4ebf3f37fb99a1b736/diff:/var/lib/docker/overlay2/bc7f29a539a015938110c4c9a71142bbde623a90f76b0404d91fbe4905541772/diff:/var/lib/docker/overlay2/ae92469fe3cd4daea2642b7b874b7b4a8358eb671f6c503c137471ee743e16f0/diff:/var/lib/docker/overlay2/44e253e12d80f42d3656f42915bf7313a0739a9ce883e8d3f255832ea6bea939/diff:/var/lib/docker/overlay2/cb40ea411bee1455d3fe042fedd4c81db12362fcb9b09f7467f619671c6a261f/diff:/var/lib/docker/overlay2/4f3e5fc28e7d877350f96a68cc0f56d4d6df35aca4a19e74ebfa8475fe9298bc/diff:/var/lib/docker/overlay2/a509b1220830b125150cb76868d375315c5c88cae8bd95d16b71e69b402031a3/diff:/var/lib/docker/overlay2/c4036d45e16311605530f123a88e1a183e2283ac08b8086215877d6a7c77c7b7/diff:/var/lib/docker/overlay2/12925b396685015d594b70fea14ead1b3dc32b324413c181e3c594b302178ff9/diff:/var/lib/docker/overlay2/751df4830d2471b879d368637873b16d6dc13ec61ec19b65848a6ee4ba46aa92/diff:/var/lib/docker/overlay2/a3739f3ba8e81b2eb78262bfecb59d801884705f90327dfc586b0593c0d778a7/diff:/var/lib/docker/overlay2/a4df73104a9f0a0c40426295692478c40175894a30ae045da256da5844eb4b71/diff:/var/lib/docker/overlay2/69d71a2bcc079517e1d986f4009082d98c3e1c9113626fc66f9cb79c9f84ebd8/diff:/var/lib/docker/overlay2/656667c2a92d49653076a8565b314ddfb9e4ce0e966ec7d91e2bdea5783c88f4/diff:/var/lib/docker/overlay2/9dfc69f0b36841e53d46bc0051357568b9ae52abca56d118b65244edcac06924/diff:/var/lib/docker/overlay2/58a4b7f5d566e180f06cad03f9af13af591bedb03180a0cd6d99aa320ebbd1da/diff:/var/lib/docker/overlay2/1cdb123ce77cb973eeca0edb3152f6ed5cce67bad0e069879f43d4caa9c95162/diff:/var/lib/docker/overlay2/020def275eda7de45ddcb77ebd7bcbeffb1c19619f6e8f80946a7dc0d3d33bc3/diff:/var/lib/docker/overlay2/3b393c4fa4219cc6b08c30510f3cf433253b39a1be67b56835f8006a549306b3/diff:/var/lib/docker/overlay2/e803618b3cea561d0012039af9d6d591538e98fb514d2c8275a7e091a32aceae/diff:/var/lib/docker/overlay2/431a92a07380551d8afc84f9e83ba6d026249c8cbcc5bfc7e69bc8aab29649e7/diff:/var/lib/docker/overlay2/7a65eede3d34ca34dda0f2a2550f8ceaa00b8db9d72c8262c8b6d83594e444fe/diff:/var/lib/docker/overlay2/cc7d22683d933905ba78fdd9fef014e935d154642ea9f9301ee6e0ac45ab0e37/diff",
"MergedDir": "/var/lib/docker/overlay2/586e0e88e008a8048c5f799fb26d2f2ab063d171cddfcc320c98e9011e4a6232/merged",
"UpperDir": "/var/lib/docker/overlay2/586e0e88e008a8048c5f799fb26d2f2ab063d171cddfcc320c98e9011e4a6232/diff",
"WorkDir": "/var/lib/docker/overlay2/586e0e88e008a8048c5f799fb26d2f2ab063d171cddfcc320c98e9011e4a6232/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-185027",
"Source": "/var/lib/docker/volumes/functional-185027/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-185027",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-185027",
"name.minikube.sigs.k8s.io": "functional-185027",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "4c3cf0bcfb8b4ec07d971b8e24d06cbf18c709351361aea4a97a076cbe6e68fc",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "60579"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "60580"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "60581"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "60582"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "60583"
}
]
},
"SandboxKey": "/var/run/docker/netns/4c3cf0bcfb8b",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-185027": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"776ee30f1898",
"functional-185027"
],
"NetworkID": "20915f39ccc703c4a27994c607b8e8e314d77bf75a273fca6fc8b2fb950c3175",
"EndpointID": "47d81958de9f11a4cc022a00e1d73d5671f6ffaf0e7891a6820ea38cbf1ff3e8",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-185027 -n functional-185027
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-185027 -n functional-185027: (1.9890349s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-185027 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-185027 logs -n 25: (3.7688979s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| image | functional-185027 image save | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:56 GMT |
| | gcr.io/google-containers/addon-resizer:functional-185027 | | | | | |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| ssh | functional-185027 ssh sudo cat | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:56 GMT |
| | /etc/ssl/certs/8160.pem | | | | | |
| ssh | functional-185027 ssh sudo cat | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:56 GMT |
| | /usr/share/ca-certificates/8160.pem | | | | | |
| ssh | functional-185027 ssh sudo cat | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:56 GMT |
| | /etc/ssl/certs/51391683.0 | | | | | |
| image | functional-185027 image rm | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:56 GMT |
| | gcr.io/google-containers/addon-resizer:functional-185027 | | | | | |
| ssh | functional-185027 ssh sudo cat | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:56 GMT |
| | /etc/ssl/certs/81602.pem | | | | | |
| image | functional-185027 image ls | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:56 GMT |
| ssh | functional-185027 ssh sudo cat | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:56 GMT |
| | /usr/share/ca-certificates/81602.pem | | | | | |
| image | functional-185027 image load | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:57 GMT |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| ssh | functional-185027 ssh sudo cat | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:56 GMT |
| | /etc/ssl/certs/3ec20f2e.0 | | | | | |
| docker-env | functional-185027 docker-env | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:56 GMT | 31 Oct 22 18:57 GMT |
| image | functional-185027 image ls | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| image | functional-185027 image save --daemon | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | gcr.io/google-containers/addon-resizer:functional-185027 | | | | | |
| docker-env | functional-185027 docker-env | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| ssh | functional-185027 ssh sudo cat | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | /etc/test/nested/copy/8160/hosts | | | | | |
| update-context | functional-185027 | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-185027 | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-185027 | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-185027 image ls | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | --format short | | | | | |
| image | functional-185027 image ls | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | --format yaml | | | | | |
| ssh | functional-185027 ssh pgrep | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | |
| | buildkitd | | | | | |
| image | functional-185027 image build -t | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | localhost/my-image:functional-185027 | | | | | |
| | testdata\build | | | | | |
| image | functional-185027 image ls | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| image | functional-185027 image ls | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | --format json | | | | | |
| image | functional-185027 image ls | functional-185027 | minikube2\jenkins | v1.27.1 | 31 Oct 22 18:57 GMT | 31 Oct 22 18:57 GMT |
| | --format table | | | | | |
|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/10/31 18:55:35
Running on machine: minikube2
Binary: Built with gc go1.19.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1031 18:55:35.012402 4688 out.go:296] Setting OutFile to fd 640 ...
I1031 18:55:35.088400 4688 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 18:55:35.088400 4688 out.go:309] Setting ErrFile to fd 960...
I1031 18:55:35.088400 4688 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 18:55:35.116119 4688 out.go:303] Setting JSON to false
I1031 18:55:35.120932 4688 start.go:116] hostinfo: {"hostname":"minikube2","uptime":1527,"bootTime":1667241008,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W1031 18:55:35.120932 4688 start.go:124] gopshost.Virtualization returned error: not implemented yet
I1031 18:55:35.127928 4688 out.go:177] * [functional-185027] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
I1031 18:55:35.135929 4688 notify.go:220] Checking for updates...
I1031 18:55:35.140922 4688 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I1031 18:55:35.145927 4688 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I1031 18:55:35.150928 4688 out.go:177] - MINIKUBE_LOCATION=15232
I1031 18:55:35.155919 4688 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1031 18:55:35.163920 4688 config.go:180] Loaded profile config "functional-185027": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 18:55:35.165923 4688 driver.go:365] Setting default libvirt URI to qemu:///system
I1031 18:55:35.552089 4688 docker.go:137] docker version: linux-20.10.20
I1031 18:55:35.562109 4688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 18:55:36.322711 4688 info.go:266] docker info: {ID:26MY:LROH:OGTG:WTJY:KCBY:XAYF:V6YO:CPRU:CUYX:XLZC:GUCW:MH4B Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-10-31 18:55:35.7467486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1031 18:55:36.339741 4688 out.go:177] * Using the docker driver based on existing profile
I1031 18:55:36.344716 4688 start.go:282] selected driver: docker
I1031 18:55:36.344716 4688 start.go:808] validating driver "docker" against &{Name:functional-185027 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-185027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 18:55:36.344716 4688 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1031 18:55:36.371714 4688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 18:55:37.140055 4688 info.go:266] docker info: {ID:26MY:LROH:OGTG:WTJY:KCBY:XAYF:V6YO:CPRU:CUYX:XLZC:GUCW:MH4B Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:53 SystemTime:2022-10-31 18:55:36.5802561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1031 18:55:37.200061 4688 cni.go:95] Creating CNI manager for ""
I1031 18:55:37.200061 4688 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1031 18:55:37.200061 4688 start_flags.go:317] config:
{Name:functional-185027 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-185027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 18:55:37.314147 4688 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Mon 2022-10-31 18:51:06 UTC, end at Mon 2022-10-31 19:30:32 UTC. --
Oct 31 18:54:04 functional-185027 dockerd[8021]: time="2022-10-31T18:54:04.219273400Z" level=info msg="ignoring event" container=91254f0ddc8e2c81d8c2a18a71e24bdff57d96150509fe387af0f0af1010bfc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:04 functional-185027 dockerd[8021]: time="2022-10-31T18:54:04.219397300Z" level=info msg="ignoring event" container=186e0553f4396f9397493d18e941d0820177a5539108191ec0d59a5cb202cc5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:04 functional-185027 dockerd[8021]: time="2022-10-31T18:54:04.219506100Z" level=info msg="ignoring event" container=59f326b84c624ebd0854e015a3c9d2dc0fb44bf563a3794601cc7e404cb08bfe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:04 functional-185027 dockerd[8021]: time="2022-10-31T18:54:04.223623600Z" level=info msg="ignoring event" container=59b180e5e3012bcbd06b9463aac245fc17443452e39bca7f2415718506611ed3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:04 functional-185027 dockerd[8021]: time="2022-10-31T18:54:04.316744600Z" level=info msg="ignoring event" container=bb525690d1f21afa58dd476daf62accbfc7242f1886dd75f2fd031f1f3dc84cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:04 functional-185027 dockerd[8021]: time="2022-10-31T18:54:04.316933800Z" level=info msg="ignoring event" container=7963ee55b990e7fb8c8a62ef7ab7400731f22ab0399fe956a6742aadf76965b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:04 functional-185027 dockerd[8021]: time="2022-10-31T18:54:04.416329600Z" level=info msg="ignoring event" container=5a4847c4053aaf2daaf7f043a0b34c1dde36c1ada70f4cf511571e1bb9ef4232 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:04 functional-185027 dockerd[8021]: time="2022-10-31T18:54:04.416465800Z" level=info msg="ignoring event" container=4fdaf978321f5280d9b094c4836bb44e4fb08645fd3914f955150e10ee881269 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:04 functional-185027 dockerd[8021]: time="2022-10-31T18:54:04.418234000Z" level=info msg="ignoring event" container=32f90743c28f6feda5275bcf960d203908d5cfbdcfbae443536b9635a93a42fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:05 functional-185027 dockerd[8021]: time="2022-10-31T18:54:05.095055500Z" level=error msg="collecting stats for e87b09046059b43fe359ff05b4c0e53596c078952c7ebd85e7bbd6af756b403d: failed to retrieve the statistics for eth0 in netns /var/run/docker/netns/5773a56ff9d0: Link not found"
Oct 31 18:54:06 functional-185027 dockerd[8021]: time="2022-10-31T18:54:06.610050300Z" level=info msg="ignoring event" container=c75f9b5382372ea624a552a6e2b14a120aae25fd153682274006a4f8d29a692e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:06 functional-185027 dockerd[8021]: time="2022-10-31T18:54:06.910664800Z" level=info msg="ignoring event" container=4bd7ccaf2ab0b41c0f6f7252daed4b039edab66a3a75f4525486b77a3e281c5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:13 functional-185027 dockerd[8021]: time="2022-10-31T18:54:13.819208900Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=e87b09046059b43fe359ff05b4c0e53596c078952c7ebd85e7bbd6af756b403d
Oct 31 18:54:13 functional-185027 dockerd[8021]: time="2022-10-31T18:54:13.932317000Z" level=info msg="ignoring event" container=e87b09046059b43fe359ff05b4c0e53596c078952c7ebd85e7bbd6af756b403d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:21 functional-185027 dockerd[8021]: time="2022-10-31T18:54:21.639304400Z" level=info msg="ignoring event" container=18bdc430b017cb4022afc7e01d8411f3f40eb1348657f56bdd584e5ec0a27cba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:23 functional-185027 dockerd[8021]: time="2022-10-31T18:54:23.039565500Z" level=info msg="ignoring event" container=df60dad66a6a84b6b111615a198bccb5fdc095746d0aa39f81346ae870b190a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:32 functional-185027 dockerd[8021]: time="2022-10-31T18:54:32.034082300Z" level=info msg="ignoring event" container=f9d270541477ede14ac5f264091b468a1b9dd3be3d634f8719168ceb32b0aed2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:32 functional-185027 dockerd[8021]: time="2022-10-31T18:54:32.223876300Z" level=info msg="ignoring event" container=b9da0fc25bc27e423fe599bde290ebaa720480ad0cb9fe92ba838de605cff37f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:32 functional-185027 dockerd[8021]: time="2022-10-31T18:54:32.640402300Z" level=info msg="ignoring event" container=e4146521b389bb689fd73a556d875917a7b0a8e4e3ee2d8c880944256d77ca01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:41 functional-185027 dockerd[8021]: time="2022-10-31T18:54:41.494331900Z" level=info msg="ignoring event" container=eda7672410bd1a9d14b22d0a102fc0b48ba4ac21a5e97155822e002c256a3ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:54:46 functional-185027 dockerd[8021]: time="2022-10-31T18:54:46.611470400Z" level=info msg="ignoring event" container=c5a5b14d2885c663e12829653a75812bdd173cb22232fc489c2c07fa30d99357 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:56:36 functional-185027 dockerd[8021]: time="2022-10-31T18:56:36.896564500Z" level=info msg="ignoring event" container=5996073b51e5959c0c555abd9cd9899e5addf05ef8cfd42e323ff5837a6ecff6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:56:37 functional-185027 dockerd[8021]: time="2022-10-31T18:56:37.091135400Z" level=info msg="ignoring event" container=7bb2aa2f4d3bae0bf8e6b93521b09e59eec5092fdd55bc49fe0c398337109805 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:57:35 functional-185027 dockerd[8021]: time="2022-10-31T18:57:35.235936700Z" level=info msg="ignoring event" container=ed56b45c322395d0632c132c7dcaac6f5578aa5a809ca7c221cff12f70abc22e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:57:36 functional-185027 dockerd[8021]: time="2022-10-31T18:57:36.112502900Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f483674f39785 mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04 32 minutes ago Running mysql 0 fcffcaaeb0a34
16088f6d54a40 82e4c8a736a4f 33 minutes ago Running echoserver 0 1407a2530d626
977303184ab21 nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f 33 minutes ago Running myfrontend 0 8c6822edbc904
3d9a449bcc5bd nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3 34 minutes ago Running nginx 0 9f90184d47ce4
6ff40a0d5c1fb k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 34 minutes ago Running echoserver 0 7ed7f802a7c54
f70568e3c865c 6e38f40d628db 35 minutes ago Running storage-provisioner 6 b61cf76f09851
f49fc6e914c01 6039992312758 35 minutes ago Running kube-controller-manager 4 b9e4860a47854
4508c6173fb7f beaaf00edd38a 35 minutes ago Running kube-proxy 3 91402179d3128
a02fb12a64eb4 5185b96f0becf 35 minutes ago Running coredns 4 15324a941d699
c5a5b14d2885c 6e38f40d628db 35 minutes ago Exited storage-provisioner 5 b61cf76f09851
51b1169513b95 0346dbd74bcb9 35 minutes ago Running kube-apiserver 2 e02515ab34e72
df60dad66a6a8 0346dbd74bcb9 36 minutes ago Exited kube-apiserver 1 e02515ab34e72
90823b98eee0b 6d23ec0e8b87e 36 minutes ago Running kube-scheduler 4 86366cfc95915
eda7672410bd1 6039992312758 36 minutes ago Exited kube-controller-manager 3 b9e4860a47854
0fb560da77721 a8a176a5d5d69 36 minutes ago Running etcd 3 813d6848d6d91
e87b09046059b 5185b96f0becf 36 minutes ago Exited coredns 3 186e0553f4396
32f90743c28f6 a8a176a5d5d69 36 minutes ago Exited etcd 2 4fdaf978321f5
7963ee55b990e beaaf00edd38a 36 minutes ago Exited kube-proxy 2 91254f0ddc8e2
4bd7ccaf2ab0b 6d23ec0e8b87e 36 minutes ago Exited kube-scheduler 3 59f326b84c624
*
* ==> coredns [a02fb12a64eb] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> coredns [e87b09046059] <==
* [INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/health: Going into lameduck mode for 5s
[ERROR] plugin/errors: 2 5635440496865854398.8839545735464941848. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
*
* ==> describe nodes <==
* Name: functional-185027
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-185027
kubernetes.io/os=linux
minikube.k8s.io/commit=1c73d673499e72567c9d9cb6c201ec071d452750
minikube.k8s.io/name=functional-185027
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_10_31T18_51_43_0700
minikube.k8s.io/version=v1.27.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 31 Oct 2022 18:51:38 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-185027
AcquireTime: <unset>
RenewTime: Mon, 31 Oct 2022 19:30:27 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 31 Oct 2022 19:29:07 +0000 Mon, 31 Oct 2022 18:51:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 31 Oct 2022 19:29:07 +0000 Mon, 31 Oct 2022 18:51:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 31 Oct 2022 19:29:07 +0000 Mon, 31 Oct 2022 18:51:38 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 31 Oct 2022 19:29:07 +0000 Mon, 31 Oct 2022 18:51:54 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-185027
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: 996614ec4c814b87b7ec8ebee3d0e8c9
System UUID: 996614ec4c814b87b7ec8ebee3d0e8c9
Boot ID: f9f5cc23-4551-43fa-ab1e-3a9543754a23
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-5fcdfb5cc4-sdlww 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default hello-node-connect-6458c8fb6f-bntr4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
default mysql-596b7fcdbf-hgd9q 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 33m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
kube-system coredns-565d847f94-shxzm 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-185027 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-185027 250m (1%) 0 (0%) 0 (0%) 0 (0%) 36m
kube-system kube-controller-manager-functional-185027 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-kx6pz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-185027 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 38m kube-proxy
Normal Starting 35m kube-proxy
Normal Starting 37m kube-proxy
Normal NodeHasSufficientMemory 39m (x6 over 39m) kubelet Node functional-185027 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x6 over 39m) kubelet Node functional-185027 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 39m (x6 over 39m) kubelet Node functional-185027 status is now: NodeHasSufficientPID
Normal Starting 38m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 38m kubelet Node functional-185027 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-185027 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-185027 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 38m kubelet Node functional-185027 status is now: NodeReady
Normal RegisteredNode 38m node-controller Node functional-185027 event: Registered Node functional-185027 in Controller
Normal RegisteredNode 37m node-controller Node functional-185027 event: Registered Node functional-185027 in Controller
Normal Starting 36m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 36m (x8 over 36m) kubelet Node functional-185027 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 36m (x8 over 36m) kubelet Node functional-185027 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 36m (x7 over 36m) kubelet Node functional-185027 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 36m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 35m node-controller Node functional-185027 event: Registered Node functional-185027 in Controller
*
* ==> dmesg <==
* [Oct31 19:05] WSL2: Performing memory compaction.
[Oct31 19:06] WSL2: Performing memory compaction.
[Oct31 19:07] WSL2: Performing memory compaction.
[Oct31 19:08] WSL2: Performing memory compaction.
[Oct31 19:09] WSL2: Performing memory compaction.
[Oct31 19:10] WSL2: Performing memory compaction.
[Oct31 19:11] WSL2: Performing memory compaction.
[Oct31 19:12] WSL2: Performing memory compaction.
[Oct31 19:13] WSL2: Performing memory compaction.
[Oct31 19:14] WSL2: Performing memory compaction.
[Oct31 19:15] WSL2: Performing memory compaction.
[Oct31 19:16] WSL2: Performing memory compaction.
[Oct31 19:17] WSL2: Performing memory compaction.
[Oct31 19:18] WSL2: Performing memory compaction.
[Oct31 19:19] WSL2: Performing memory compaction.
[Oct31 19:20] WSL2: Performing memory compaction.
[Oct31 19:21] WSL2: Performing memory compaction.
[Oct31 19:22] WSL2: Performing memory compaction.
[Oct31 19:23] WSL2: Performing memory compaction.
[Oct31 19:24] WSL2: Performing memory compaction.
[Oct31 19:25] WSL2: Performing memory compaction.
[Oct31 19:26] WSL2: Performing memory compaction.
[Oct31 19:27] WSL2: Performing memory compaction.
[Oct31 19:28] WSL2: Performing memory compaction.
[Oct31 19:29] WSL2: Performing memory compaction.
*
* ==> etcd [0fb560da7772] <==
* {"level":"info","ts":"2022-10-31T18:58:08.929Z","caller":"traceutil/trace.go:171","msg":"trace[1229682372] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:899; }","duration":"901.7122ms","start":"2022-10-31T18:58:08.027Z","end":"2022-10-31T18:58:08.929Z","steps":["trace[1229682372] 'agreement among raft nodes before linearized reading' (duration: 883.0547ms)","trace[1229682372] 'range keys from in-memory index tree' (duration: 18.459ms)"],"step_count":2}
{"level":"warn","ts":"2022-10-31T18:58:08.929Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-31T18:58:08.027Z","time spent":"902.0287ms","remote":"127.0.0.1:42442","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2022-10-31T18:58:18.251Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128016766590113802,"retry-timeout":"500ms"}
{"level":"warn","ts":"2022-10-31T18:58:18.614Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.0018742s","expected-duration":"1s"}
{"level":"info","ts":"2022-10-31T18:58:18.615Z","caller":"traceutil/trace.go:171","msg":"trace[2110014489] linearizableReadLoop","detail":"{readStateIndex:998; appliedIndex:998; }","duration":"864.6781ms","start":"2022-10-31T18:58:17.750Z","end":"2022-10-31T18:58:18.615Z","steps":["trace[2110014489] 'read index received' (duration: 864.6682ms)","trace[2110014489] 'applied index is now lower than readState.Index' (duration: 6.6µs)"],"step_count":2}
{"level":"warn","ts":"2022-10-31T18:58:18.615Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"588.1681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2022-10-31T18:58:18.615Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"865.0901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13520"}
{"level":"info","ts":"2022-10-31T18:58:18.615Z","caller":"traceutil/trace.go:171","msg":"trace[1502165834] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:907; }","duration":"865.146ms","start":"2022-10-31T18:58:17.750Z","end":"2022-10-31T18:58:18.615Z","steps":["trace[1502165834] 'agreement among raft nodes before linearized reading' (duration: 864.885ms)"],"step_count":1}
{"level":"warn","ts":"2022-10-31T18:58:18.615Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-31T18:58:17.750Z","time spent":"865.2175ms","remote":"127.0.0.1:42384","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13544,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"info","ts":"2022-10-31T18:58:18.615Z","caller":"traceutil/trace.go:171","msg":"trace[1655331472] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:907; }","duration":"588.3216ms","start":"2022-10-31T18:58:18.027Z","end":"2022-10-31T18:58:18.615Z","steps":["trace[1655331472] 'agreement among raft nodes before linearized reading' (duration: 588.0913ms)"],"step_count":1}
{"level":"warn","ts":"2022-10-31T18:58:18.616Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-31T18:58:18.027Z","time spent":"588.4654ms","remote":"127.0.0.1:42442","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2022-10-31T19:04:47.671Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":977}
{"level":"info","ts":"2022-10-31T19:04:47.673Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":977,"took":"1.5617ms"}
{"level":"warn","ts":"2022-10-31T19:06:35.919Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-10-31T19:06:35.920Z","caller":"traceutil/trace.go:171","msg":"trace[605096856] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:1263; }","duration":"106.7206ms","start":"2022-10-31T19:06:35.813Z","end":"2022-10-31T19:06:35.920Z","steps":["trace[605096856] 'count revisions from in-memory index tree' (duration: 106.1346ms)"],"step_count":1}
{"level":"info","ts":"2022-10-31T19:09:47.688Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1188}
{"level":"info","ts":"2022-10-31T19:09:47.689Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1188,"took":"1.272ms"}
{"level":"info","ts":"2022-10-31T19:14:47.708Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1398}
{"level":"info","ts":"2022-10-31T19:14:47.709Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1398,"took":"878.4µs"}
{"level":"info","ts":"2022-10-31T19:19:47.724Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1609}
{"level":"info","ts":"2022-10-31T19:19:47.725Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1609,"took":"811.3µs"}
{"level":"info","ts":"2022-10-31T19:24:47.745Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1818}
{"level":"info","ts":"2022-10-31T19:24:47.746Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1818,"took":"629.1µs"}
{"level":"info","ts":"2022-10-31T19:29:47.760Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2028}
{"level":"info","ts":"2022-10-31T19:29:47.761Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2028,"took":"679µs"}
*
* ==> etcd [32f90743c28f] <==
* {"level":"info","ts":"2022-10-31T18:53:58.616Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-10-31T18:53:58.616Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-10-31T18:54:00.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
{"level":"info","ts":"2022-10-31T18:54:00.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
{"level":"info","ts":"2022-10-31T18:54:00.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
{"level":"info","ts":"2022-10-31T18:54:00.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
{"level":"info","ts":"2022-10-31T18:54:00.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-10-31T18:54:00.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
{"level":"info","ts":"2022-10-31T18:54:00.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-10-31T18:54:00.213Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-185027 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-10-31T18:54:00.214Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T18:54:00.214Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T18:54:00.214Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-10-31T18:54:00.214Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-10-31T18:54:00.218Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-10-31T18:54:00.219Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-10-31T18:54:03.813Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-10-31T18:54:03.814Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-185027","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"warn","ts":"2022-10-31T18:54:03.814Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"127.0.0.1:41436","server-name":"","ip-addresses":[],"dns-names":[],"error":"write tcp 127.0.0.1:2379->127.0.0.1:41436: use of closed network connection"}
WARNING: 2022/10/31 18:54:03 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/10/31 18:54:03 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-10-31T18:54:03.915Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-10-31T18:54:04.124Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-31T18:54:04.126Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-10-31T18:54:04.212Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-185027","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> kernel <==
* 19:30:33 up 57 min, 0 users, load average: 0.67, 0.50, 0.61
Linux functional-185027 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [51b1169513b9] <==
* I1031 18:54:54.741512 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I1031 18:54:54.794205 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1031 18:54:54.815110 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1031 18:55:26.198771 1 controller.go:616] quota admission added evaluator for: replicasets.apps
I1031 18:55:26.507931 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.107.136.235]
I1031 18:55:26.532901 1 controller.go:616] quota admission added evaluator for: endpoints
I1031 18:55:26.533355 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1031 18:55:42.008994 1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.98.252.28]
I1031 18:56:28.857581 1 trace.go:205] Trace[699037358]: "List(recursive=true) etcd3" audit-id:b248d1df-b38c-4a11-abcd-1f1ba92550bf,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (31-Oct-2022 18:56:27.843) (total time: 1014ms):
Trace[699037358]: [1.014434s] [1.014434s] END
I1031 18:56:28.858683 1 trace.go:205] Trace[1193676666]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:b248d1df-b38c-4a11-abcd-1f1ba92550bf,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (31-Oct-2022 18:56:27.843) (total time: 1015ms):
Trace[1193676666]: ---"Listing from storage done" 1014ms (18:56:28.857)
Trace[1193676666]: [1.0155663s] [1.0155663s] END
I1031 18:57:08.714697 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.110.252.224]
I1031 18:57:11.602009 1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.101.92.50]
I1031 18:58:08.931360 1 trace.go:205] Trace[1500187899]: "List(recursive=true) etcd3" audit-id:43159608-ce1e-472e-b772-8b2b9b21d39a,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (31-Oct-2022 18:58:07.750) (total time: 1180ms):
Trace[1500187899]: [1.1809403s] [1.1809403s] END
I1031 18:58:08.932284 1 trace.go:205] Trace[1991741161]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:43159608-ce1e-472e-b772-8b2b9b21d39a,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (31-Oct-2022 18:58:07.750) (total time: 1181ms):
Trace[1991741161]: ---"Listing from storage done" 1181ms (18:58:08.931)
Trace[1991741161]: [1.1819596s] [1.1819596s] END
I1031 18:58:18.617642 1 trace.go:205] Trace[1518403917]: "List(recursive=true) etcd3" audit-id:8967bcf8-b0aa-4006-9864-6aa9bcafc551,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (31-Oct-2022 18:58:17.749) (total time: 868ms):
Trace[1518403917]: [868.3069ms] [868.3069ms] END
I1031 18:58:18.618448 1 trace.go:205] Trace[779588470]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:8967bcf8-b0aa-4006-9864-6aa9bcafc551,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (31-Oct-2022 18:58:17.749) (total time: 869ms):
Trace[779588470]: ---"Listing from storage done" 868ms (18:58:18.617)
Trace[779588470]: [869.1474ms] [869.1474ms] END
*
* ==> kube-apiserver [df60dad66a6a] <==
* I1031 18:54:22.946600 1 server.go:563] external host was not specified, using 192.168.49.2
I1031 18:54:22.948046 1 server.go:161] Version: v1.25.3
I1031 18:54:22.948293 1 server.go:163] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
E1031 18:54:22.948723 1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
*
* ==> kube-controller-manager [eda7672410bd] <==
* I1031 18:54:22.215059 1 serving.go:348] Generated self-signed cert in-memory
I1031 18:54:23.898067 1 controllermanager.go:178] Version: v1.25.3
I1031 18:54:23.898261 1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 18:54:23.900268 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I1031 18:54:23.900458 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1031 18:54:23.900512 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1031 18:54:23.900334 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
F1031 18:54:41.412879 1 controllermanager.go:221] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
*
* ==> kube-controller-manager [f49fc6e914c0] <==
* I1031 18:55:16.611776 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I1031 18:55:16.611785 1 shared_informer.go:262] Caches are synced for namespace
I1031 18:55:16.611551 1 shared_informer.go:262] Caches are synced for endpoint_slice
I1031 18:55:16.611940 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I1031 18:55:16.612050 1 shared_informer.go:262] Caches are synced for crt configmap
I1031 18:55:16.612075 1 shared_informer.go:262] Caches are synced for cronjob
I1031 18:55:16.612076 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I1031 18:55:16.612209 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1031 18:55:16.611651 1 shared_informer.go:262] Caches are synced for service account
I1031 18:55:16.611758 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I1031 18:55:16.611684 1 shared_informer.go:262] Caches are synced for HPA
I1031 18:55:16.623752 1 shared_informer.go:262] Caches are synced for resource quota
I1031 18:55:16.714825 1 shared_informer.go:262] Caches are synced for resource quota
I1031 18:55:16.721401 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I1031 18:55:16.723080 1 shared_informer.go:262] Caches are synced for disruption
I1031 18:55:17.043644 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 18:55:17.043842 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1031 18:55:17.124380 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 18:55:26.203595 1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-5fcdfb5cc4 to 1"
I1031 18:55:26.255908 1 event.go:294] "Event occurred" object="default/hello-node-5fcdfb5cc4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-5fcdfb5cc4-sdlww"
I1031 18:55:40.914093 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1031 18:57:08.395718 1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-6458c8fb6f to 1"
I1031 18:57:08.407371 1 event.go:294] "Event occurred" object="default/hello-node-connect-6458c8fb6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-6458c8fb6f-bntr4"
I1031 18:57:11.643963 1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-596b7fcdbf to 1"
I1031 18:57:11.746686 1 event.go:294] "Event occurred" object="default/mysql-596b7fcdbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-596b7fcdbf-hgd9q"
*
* ==> kube-proxy [4508c6173fb7] <==
* I1031 18:54:53.219611 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I1031 18:54:53.222925 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I1031 18:54:53.226042 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I1031 18:54:53.229026 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I1031 18:54:53.308896 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I1031 18:54:53.328260 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I1031 18:54:53.328411 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I1031 18:54:53.328579 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1031 18:54:53.522190 1 server_others.go:206] "Using iptables Proxier"
I1031 18:54:53.522357 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I1031 18:54:53.522374 1 server_others.go:214] "Creating dualStackProxier for iptables"
I1031 18:54:53.522392 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1031 18:54:53.522412 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1031 18:54:53.523238 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1031 18:54:53.523683 1 server.go:661] "Version info" version="v1.25.3"
I1031 18:54:53.523807 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 18:54:53.524572 1 config.go:317] "Starting service config controller"
I1031 18:54:53.524838 1 shared_informer.go:255] Waiting for caches to sync for service config
I1031 18:54:53.525025 1 config.go:444] "Starting node config controller"
I1031 18:54:53.525039 1 shared_informer.go:255] Waiting for caches to sync for node config
I1031 18:54:53.525742 1 config.go:226] "Starting endpoint slice config controller"
I1031 18:54:53.525895 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1031 18:54:53.625826 1 shared_informer.go:262] Caches are synced for node config
I1031 18:54:53.625990 1 shared_informer.go:262] Caches are synced for service config
I1031 18:54:53.626094 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-proxy [7963ee55b990] <==
* E1031 18:53:57.830873 1 proxier.go:656] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I1031 18:53:57.920379 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I1031 18:53:57.926047 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I1031 18:53:58.015829 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I1031 18:53:58.019603 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I1031 18:53:58.022714 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E1031 18:53:58.028098 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-185027": dial tcp 192.168.49.2:8441: connect: connection refused
*
* ==> kube-scheduler [4bd7ccaf2ab0] <==
* I1031 18:53:59.721320 1 serving.go:348] Generated self-signed cert in-memory
*
* ==> kube-scheduler [90823b98eee0] <==
* E1031 18:54:42.391501 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?resourceVersion=540": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 18:54:45.990936 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8441/apis/apps/v1/statefulsets?resourceVersion=540": dial tcp 192.168.49.2:8441: connect: connection refused
E1031 18:54:45.991065 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8441/apis/apps/v1/statefulsets?resourceVersion=540": dial tcp 192.168.49.2:8441: connect: connection refused
W1031 18:54:51.521331 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1031 18:54:51.521382 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W1031 18:54:51.521563 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1031 18:54:51.521590 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W1031 18:54:51.521726 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1031 18:54:51.521840 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W1031 18:54:51.521988 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1031 18:54:51.522013 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W1031 18:54:51.522113 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1031 18:54:51.522135 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W1031 18:54:51.522170 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1031 18:54:51.522201 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W1031 18:54:51.522206 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1031 18:54:51.522237 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W1031 18:54:51.522355 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1031 18:54:51.522473 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W1031 18:54:51.522627 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1031 18:54:51.522659 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W1031 18:54:51.620397 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1031 18:54:51.620545 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W1031 18:54:51.620655 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1031 18:54:51.620774 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-10-31 18:51:06 UTC, end at Mon 2022-10-31 19:30:34 UTC. --
Oct 31 18:56:37 functional-185027 kubelet[10428]: I1031 18:56:37.946939 10428 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8b59\" (UniqueName: \"kubernetes.io/projected/2712abfa-558c-4ba1-ae77-24775c4a987b-kube-api-access-d8b59\") pod \"2712abfa-558c-4ba1-ae77-24775c4a987b\" (UID: \"2712abfa-558c-4ba1-ae77-24775c4a987b\") "
Oct 31 18:56:37 functional-185027 kubelet[10428]: I1031 18:56:37.947318 10428 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2712abfa-558c-4ba1-ae77-24775c4a987b-pvc-f988d02a-b47b-4fee-8236-57c064e8972f" (OuterVolumeSpecName: "mypd") pod "2712abfa-558c-4ba1-ae77-24775c4a987b" (UID: "2712abfa-558c-4ba1-ae77-24775c4a987b"). InnerVolumeSpecName "pvc-f988d02a-b47b-4fee-8236-57c064e8972f". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 31 18:56:37 functional-185027 kubelet[10428]: I1031 18:56:37.952166 10428 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2712abfa-558c-4ba1-ae77-24775c4a987b-kube-api-access-d8b59" (OuterVolumeSpecName: "kube-api-access-d8b59") pod "2712abfa-558c-4ba1-ae77-24775c4a987b" (UID: "2712abfa-558c-4ba1-ae77-24775c4a987b"). InnerVolumeSpecName "kube-api-access-d8b59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 31 18:56:38 functional-185027 kubelet[10428]: I1031 18:56:38.047195 10428 reconciler.go:399] "Volume detached for volume \"pvc-f988d02a-b47b-4fee-8236-57c064e8972f\" (UniqueName: \"kubernetes.io/host-path/2712abfa-558c-4ba1-ae77-24775c4a987b-pvc-f988d02a-b47b-4fee-8236-57c064e8972f\") on node \"functional-185027\" DevicePath \"\""
Oct 31 18:56:38 functional-185027 kubelet[10428]: I1031 18:56:38.047384 10428 reconciler.go:399] "Volume detached for volume \"kube-api-access-d8b59\" (UniqueName: \"kubernetes.io/projected/2712abfa-558c-4ba1-ae77-24775c4a987b-kube-api-access-d8b59\") on node \"functional-185027\" DevicePath \"\""
Oct 31 18:56:38 functional-185027 kubelet[10428]: I1031 18:56:38.772882 10428 scope.go:115] "RemoveContainer" containerID="5996073b51e5959c0c555abd9cd9899e5addf05ef8cfd42e323ff5837a6ecff6"
Oct 31 18:56:39 functional-185027 kubelet[10428]: I1031 18:56:39.247699 10428 topology_manager.go:205] "Topology Admit Handler"
Oct 31 18:56:39 functional-185027 kubelet[10428]: E1031 18:56:39.247918 10428 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="2712abfa-558c-4ba1-ae77-24775c4a987b" containerName="myfrontend"
Oct 31 18:56:39 functional-185027 kubelet[10428]: I1031 18:56:39.247996 10428 memory_manager.go:345] "RemoveStaleState removing state" podUID="2712abfa-558c-4ba1-ae77-24775c4a987b" containerName="myfrontend"
Oct 31 18:56:39 functional-185027 kubelet[10428]: I1031 18:56:39.357145 10428 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8z5\" (UniqueName: \"kubernetes.io/projected/d3b3a0ae-da73-4a83-9452-16e5de553bb9-kube-api-access-qx8z5\") pod \"sp-pod\" (UID: \"d3b3a0ae-da73-4a83-9452-16e5de553bb9\") " pod="default/sp-pod"
Oct 31 18:56:39 functional-185027 kubelet[10428]: I1031 18:56:39.357691 10428 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f988d02a-b47b-4fee-8236-57c064e8972f\" (UniqueName: \"kubernetes.io/host-path/d3b3a0ae-da73-4a83-9452-16e5de553bb9-pvc-f988d02a-b47b-4fee-8236-57c064e8972f\") pod \"sp-pod\" (UID: \"d3b3a0ae-da73-4a83-9452-16e5de553bb9\") " pod="default/sp-pod"
Oct 31 18:56:39 functional-185027 kubelet[10428]: I1031 18:56:39.950307 10428 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2712abfa-558c-4ba1-ae77-24775c4a987b path="/var/lib/kubelet/pods/2712abfa-558c-4ba1-ae77-24775c4a987b/volumes"
Oct 31 18:57:08 functional-185027 kubelet[10428]: I1031 18:57:08.425750 10428 topology_manager.go:205] "Topology Admit Handler"
Oct 31 18:57:08 functional-185027 kubelet[10428]: I1031 18:57:08.597651 10428 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7s2b\" (UniqueName: \"kubernetes.io/projected/023cd9cc-8f0f-43c7-a7f2-a289a41208bc-kube-api-access-n7s2b\") pod \"hello-node-connect-6458c8fb6f-bntr4\" (UID: \"023cd9cc-8f0f-43c7-a7f2-a289a41208bc\") " pod="default/hello-node-connect-6458c8fb6f-bntr4"
Oct 31 18:57:09 functional-185027 kubelet[10428]: I1031 18:57:09.713355 10428 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1407a2530d626fbd59f10a611f189357b182f7e885093855ab809e76c4a78c59"
Oct 31 18:57:12 functional-185027 kubelet[10428]: I1031 18:57:12.124833 10428 topology_manager.go:205] "Topology Admit Handler"
Oct 31 18:57:12 functional-185027 kubelet[10428]: I1031 18:57:12.221554 10428 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szwrp\" (UniqueName: \"kubernetes.io/projected/807867bc-2c97-4860-81f1-9082360d003f-kube-api-access-szwrp\") pod \"mysql-596b7fcdbf-hgd9q\" (UID: \"807867bc-2c97-4860-81f1-9082360d003f\") " pod="default/mysql-596b7fcdbf-hgd9q"
Oct 31 18:57:13 functional-185027 kubelet[10428]: I1031 18:57:13.412443 10428 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fcffcaaeb0a34bff84f0dd123f83a69a6aabe50efd836584396070f5b24500cc"
Oct 31 18:59:18 functional-185027 kubelet[10428]: W1031 18:59:18.117973 10428 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 19:04:18 functional-185027 kubelet[10428]: W1031 19:04:18.119087 10428 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 19:09:18 functional-185027 kubelet[10428]: W1031 19:09:18.120936 10428 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 19:14:18 functional-185027 kubelet[10428]: W1031 19:14:18.119449 10428 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 19:19:18 functional-185027 kubelet[10428]: W1031 19:19:18.125214 10428 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 19:24:18 functional-185027 kubelet[10428]: W1031 19:24:18.125420 10428 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 31 19:29:18 functional-185027 kubelet[10428]: W1031 19:29:18.128137 10428 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [c5a5b14d2885] <==
* I1031 18:54:46.519300 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1031 18:54:46.528297 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> storage-provisioner [f70568e3c865] <==
* I1031 18:55:14.717928 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1031 18:55:14.740244 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1031 18:55:14.740529 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1031 18:55:32.248497 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1031 18:55:32.248855 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-185027_b3cb73b1-28d2-43a5-b834-df9c715eac6c!
I1031 18:55:32.250425 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"467f581a-d6ee-4cd2-8df2-db75069e6693", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-185027_b3cb73b1-28d2-43a5-b834-df9c715eac6c became leader
I1031 18:55:32.350887 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-185027_b3cb73b1-28d2-43a5-b834-df9c715eac6c!
I1031 18:55:40.913003 1 controller.go:1332] provision "default/myclaim" class "standard": started
I1031 18:55:40.913332 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard c78b7eea-85e6-4ea5-97ab-04671f590853 375 0 2022-10-31 18:52:01 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-10-31 18:52:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f988d02a-b47b-4fee-8236-57c064e8972f &PersistentVolumeClaim{ObjectMeta:{myclaim default f988d02a-b47b-4fee-8236-57c064e8972f 694 0 2022-10-31 18:55:40 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-10-31 18:55:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-10-31 18:55:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I1031 18:55:40.914106 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f988d02a-b47b-4fee-8236-57c064e8972f", APIVersion:"v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I1031 18:55:40.914452 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f988d02a-b47b-4fee-8236-57c064e8972f" provisioned
I1031 18:55:40.914590 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I1031 18:55:40.914602 1 volume_store.go:212] Trying to save persistentvolume "pvc-f988d02a-b47b-4fee-8236-57c064e8972f"
I1031 18:55:40.929096 1 volume_store.go:219] persistentvolume "pvc-f988d02a-b47b-4fee-8236-57c064e8972f" saved
I1031 18:55:40.929450 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f988d02a-b47b-4fee-8236-57c064e8972f", APIVersion:"v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f988d02a-b47b-4fee-8236-57c064e8972f
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-185027 -n functional-185027
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-185027 -n functional-185027: (1.700843s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-185027 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-185027 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-185027 describe pod : exit status 1 (184.1983ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-185027 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2110.78s)