=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run: kubectl --context functional-20220601175654-3412 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run: kubectl --context functional-20220601175654-3412 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-6c9nb" [6dff7187-bcdf-4179-b4f5-61f1663b106c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-6c9nb" [6dff7187-bcdf-4179-b4f5-61f1663b106c] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 7.03154s
functional_test.go:1448: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service list: (6.9155626s)
functional_test.go:1462: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service --namespace=default --https --url hello-node
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1391: Failed to sent interrupt to proc not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service --namespace=default --https --url hello-node: exit status 1 (32m0.03571s)
-- stdout --
https://127.0.0.1:58749
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1464: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run: kubectl --context functional-20220601175654-3412 describe po hello-node
functional_test.go:1409: hello-node pod describe:
Name: hello-node-54fbb85-6c9nb
Namespace: default
Priority: 0
Node: functional-20220601175654-3412/192.168.49.2
Start Time: Wed, 01 Jun 2022 18:04:39 +0000
Labels: app=hello-node
pod-template-hash=54fbb85
Annotations: <none>
Status: Running
IP: 172.17.0.7
IPs:
IP: 172.17.0.7
Controlled By: ReplicaSet/hello-node-54fbb85
Containers:
echoserver:
Container ID: docker://6a459f606dda58dc39cfd752f58f019af44834a39e66aacbb47dfbc0a96d47b5
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 01 Jun 2022 18:04:41 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2vcx9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-2vcx9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-54fbb85-6c9nb to functional-20220601175654-3412
Normal Pulled 32m kubelet, functional-20220601175654-3412 Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal Created 32m kubelet, functional-20220601175654-3412 Created container echoserver
Normal Started 32m kubelet, functional-20220601175654-3412 Started container echoserver
Name: hello-node-connect-74cf8bc446-kpkgl
Namespace: default
Priority: 0
Node: functional-20220601175654-3412/192.168.49.2
Start Time: Wed, 01 Jun 2022 18:04:21 +0000
Labels: app=hello-node-connect
pod-template-hash=74cf8bc446
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/hello-node-connect-74cf8bc446
Containers:
echoserver:
Container ID: docker://bb5e57c07e7877956fb985ba62a3029ede63712bf2511d85c045edc4cd745b8e
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 01 Jun 2022 18:04:33 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glnhl (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-glnhl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-74cf8bc446-kpkgl to functional-20220601175654-3412
Normal Pulling 32m kubelet, functional-20220601175654-3412 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 32m kubelet, functional-20220601175654-3412 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 10.5925535s
Normal Created 32m kubelet, functional-20220601175654-3412 Created container echoserver
Normal Started 32m kubelet, functional-20220601175654-3412 Started container echoserver
functional_test.go:1411: (dbg) Run: kubectl --context functional-20220601175654-3412 logs -l app=hello-node
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run: kubectl --context functional-20220601175654-3412 describe svc hello-node
functional_test.go:1421: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.100.28.255
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30534/TCP
Endpoints: 172.17.0.7:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-20220601175654-3412
helpers_test.go:231: (dbg) Done: docker inspect functional-20220601175654-3412: (1.050639s)
helpers_test.go:235: (dbg) docker inspect functional-20220601175654-3412:
-- stdout --
[
{
"Id": "fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f",
"Created": "2022-06-01T17:57:47.7481206Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 20452,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-06-01T17:57:48.7685373Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
"ResolvConfPath": "/var/lib/docker/containers/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f/hostname",
"HostsPath": "/var/lib/docker/containers/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f/hosts",
"LogPath": "/var/lib/docker/containers/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f-json.log",
"Name": "/functional-20220601175654-3412",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-20220601175654-3412:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-20220601175654-3412",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/d711ffab7b1d4af0fe3d4b62a0d56d2fcd50c181ce03af8a939b4c2a212e46c2-init/diff:/var/lib/docker/overlay2/487b259deb346e6ca1e96023cfc1832638489725b45384e10e2c2effe462993c/diff:/var/lib/docker/overlay2/7830a7ee158a10893945c1b577efeb821d499cce7646d95d3c0cffb3ed372dca/diff:/var/lib/docker/overlay2/6fe83b204fd4124b69c52dc2b8620b75ac92764b58a8d1af6662ff240e517719/diff:/var/lib/docker/overlay2/6362560b46c9fab8d6514c8429f6275481f64020b6a76226333ec63d40b3509c/diff:/var/lib/docker/overlay2/b947dedac2c38cb9982c9b363e89606d658250ef2798320fdf3517f747048abd/diff:/var/lib/docker/overlay2/bc2839e6d5fd56592e9530bb7f1f81ed9502bdb7539e7f429732e9cf4cd3b17d/diff:/var/lib/docker/overlay2/1b3239e13a55e9fa626a7541842d884445974471039cc2d9226ad10f2b953536/diff:/var/lib/docker/overlay2/1884c2d81ecac540a3174fb86cefef2fd199eaa5c78d29afe6c63aff263f9584/diff:/var/lib/docker/overlay2/d1c361312180db411937b7786e1329e12f9ed7b9439d4574d6d9a237a8ef8a9e/diff:/var/lib/docker/overlay2/15125b9e77872950f8bc77e7ec27026feb64d93311200f76586c570bbceb3810/diff:/var/lib/docker/overlay2/1778c10167346a2b58dd494e4689512b56050eed4b6df53a451f9aa373c3af35/diff:/var/lib/docker/overlay2/e45fa45d984d0fdd2eaca3b15c5e81abaa51b6b84fc051f20678d16cb6548a34/diff:/var/lib/docker/overlay2/54cea2bf354fab8e2c392a574195b06b919122ff6a1fb01b05f554ba43d9719e/diff:/var/lib/docker/overlay2/8667e3403c29f1a18aaababc226712f548d7dd623a4b9ac413520cf72955fb40/diff:/var/lib/docker/overlay2/5d20284be4fd7015d5b8eb6ae55b108a262e3c66cdaa9a8c4c23a6eb1726d4da/diff:/var/lib/docker/overlay2/d623242b443d7de7f75761cda756115d0f9df9f3b73144554928ceac06876d5b/diff:/var/lib/docker/overlay2/143dd7f527aa222e0eeaafe5e0182140c95e402aa335e7994b2aa7f1e6b6ba3c/diff:/var/lib/docker/overlay2/d690aea98cc6cb39fdd3f6660997b792085628157b14d576701adc72d3e6cf55/diff:/var/lib/docker/overlay2/2bb1d07709342e3bcb4feda7dc7d17fa9707986bf88cd7dc52eab255748276e0/diff:/var/lib/docker/overlay2/ea79e7f8097cf29c435b8a18ee6332b067ec4f7858b6eaabf897d2076a8deb3e/diff:/var/lib/docker/overlay2/dab209c0bb58d228f914118438b0a79649c46857e6fcb416c0c556c049154f5d/diff:/var/lib/docker/overlay2/3bd421aaea3202bb8715cdd0f452aa411f20f2025b05d6a03811ebc7d0347896/diff:/var/lib/docker/overlay2/7dc112f5a6dc7809e579b2eaaeef54d3d5ee1326e9f35817dad641bc4e2c095a/diff:/var/lib/docker/overlay2/772b23d424621d351ce90f47e351441dc7fb224576441813bb86be52c0552022/diff:/var/lib/docker/overlay2/86ea33f163c6d58acb53a8e5bb27e1c131a6c915d7459ca03c90383b299fde58/diff:/var/lib/docker/overlay2/58deaba6fb571643d48dd090dd850eeb8fd343f41591580f4509fe61280e87de/diff:/var/lib/docker/overlay2/d8e5be8b94fe5858e777434bd7d360128719def82a5e7946fd4cb69aecab39fe/diff:/var/lib/docker/overlay2/a319e02b15899f20f933362a00fa40be829441edea2a0be36cc1e30b3417cf57/diff:/var/lib/docker/overlay2/b315efdf7f2b5f50f74664829533097f21ab8bda14478b76e9b5781079830b20/diff:/var/lib/docker/overlay2/bb96faec132eb5919c94fc772f61e63514308af6f72ec141483a94a85a77cc3b/diff:/var/lib/docker/overlay2/55dbff36528117ad96b3be9ee2396f7faee2f0b493773aa5abf5ba2b23a5f728/diff:/var/lib/docker/overlay2/f11da52264a1f34c3b2180d2adcbcb7cc077c7f91611974bf0946d6bea248de5/diff:/var/lib/docker/overlay2/6ca19b0a8327fcd8f60b06c6b0f4519ff5f0f3eacd034e6c5c16ed45239f2238/diff:/var/lib/docker/overlay2/f86ed588a9cb5b359a174312bf8595e8e896ba3d8922b0bae1d8839518d24fb6/diff:/var/lib/docker/overlay2/0bf0e1906e62c903f71626646e2339b8e2c809d40948898d803dcaf0218ed0dd/diff:/var/lib/docker/overlay2/c8ff277ec5a9fa0db24ad64c7e0523b2b5a5c7d64f2148a0c9823fdd5bc60cad/diff:/var/lib/docker/overlay2/4cfbf9fc2a4a968773220ae74312f07a616afc80cbf9a4b68e2c2357c09ca009/diff:/var/lib/docker/overlay2/9a235e4b15bee3f10260f9356535723bf351a49b1f19af094d59a1439b7a9632/diff:/var/lib/docker/overlay2/9699d245a454ce1e21f1ac875a0910a63fb34d3d2870f163d8b6d258f33c2f4f/diff:/var/lib/docker/overlay2/6e093a9dfe282a2a53a4081251541e0c5b4176bb42d9c9bf908f19b1fdc577f5/diff:/var/lib/docker/overlay2/98036438a55a1794d298c11dc1eb0633e06ed433b84d24a3972e634a0b11deb0/diff",
"MergedDir": "/var/lib/docker/overlay2/d711ffab7b1d4af0fe3d4b62a0d56d2fcd50c181ce03af8a939b4c2a212e46c2/merged",
"UpperDir": "/var/lib/docker/overlay2/d711ffab7b1d4af0fe3d4b62a0d56d2fcd50c181ce03af8a939b4c2a212e46c2/diff",
"WorkDir": "/var/lib/docker/overlay2/d711ffab7b1d4af0fe3d4b62a0d56d2fcd50c181ce03af8a939b4c2a212e46c2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-20220601175654-3412",
"Source": "/var/lib/docker/volumes/functional-20220601175654-3412/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-20220601175654-3412",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-20220601175654-3412",
"name.minikube.sigs.k8s.io": "functional-20220601175654-3412",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "42b735bd10a83f995a09f31f236acd7116ce6887781c1e4894ffa72ada936b18",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "58393"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "58389"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "58390"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "58391"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "58392"
}
]
},
"SandboxKey": "/var/run/docker/netns/42b735bd10a8",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-20220601175654-3412": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"fcdaf16a6a52",
"functional-20220601175654-3412"
],
"NetworkID": "db9a83a2c966b245ab10d1a9620cf47ee96af9f394e8fcf24c9b12fc208bb76c",
"EndpointID": "79daea23eb5020922fd179fb1458a617d7253a0ea49311a3c3846f0edf0dd161",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601175654-3412 -n functional-20220601175654-3412
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601175654-3412 -n functional-20220601175654-3412: (6.2857054s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 logs -n 25: (8.2058881s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|----------------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|----------------|---------------------|---------------------|
| ssh | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | ssh sudo cat | | | | | |
| | /etc/ssl/certs/3ec20f2e.0 | | | | | |
| cp | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | cp testdata\cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| image | functional-20220601175654-3412 image load --daemon | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220601175654-3412 | | | | | |
| ssh | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | ssh -n | | | | | |
| | functional-20220601175654-3412 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| image | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | image ls | | | | | |
| cp | functional-20220601175654-3412 cp functional-20220601175654-3412:/home/docker/cp-test.txt | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3371800001\001\cp-test.txt | | | | | |
| image | functional-20220601175654-3412 image save | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220601175654-3412 | | | | | |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| ssh | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | ssh -n | | | | | |
| | functional-20220601175654-3412 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| image | functional-20220601175654-3412 image rm | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220601175654-3412 | | | | | |
| addons | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | addons list | | | | | |
| addons | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | addons list -o json | | | | | |
| image | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | image ls | | | | | |
| image | functional-20220601175654-3412 image load | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| image | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
| | image ls | | | | | |
| image | functional-20220601175654-3412 image save --daemon | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220601175654-3412 | | | | | |
| update-context | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| service | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
| | service list | | | | | |
| update-context | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:05 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
| | image ls --format short | | | | | |
| image | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
| | image ls --format yaml | | | | | |
| image | functional-20220601175654-3412 image build -t | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
| | localhost/my-image:functional-20220601175654-3412 | | | | | |
| | testdata\build | | | | | |
| image | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
| | image ls | | | | | |
| image | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
| | image ls --format json | | | | | |
| image | functional-20220601175654-3412 | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
| | image ls --format table | | | | | |
|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|----------------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/06/01 18:02:27
Running on machine: minikube4
Binary: Built with gc go1.18.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0601 18:02:27.003550 8928 out.go:296] Setting OutFile to fd 992 ...
I0601 18:02:27.059177 8928 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0601 18:02:27.059177 8928 out.go:309] Setting ErrFile to fd 712...
I0601 18:02:27.059177 8928 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0601 18:02:27.071179 8928 out.go:303] Setting JSON to false
I0601 18:02:27.074171 8928 start.go:115] hostinfo: {"hostname":"minikube4","uptime":66662,"bootTime":1654039885,"procs":169,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
W0601 18:02:27.074171 8928 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0601 18:02:27.077211 8928 out.go:177] * [functional-20220601175654-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0601 18:02:27.080206 8928 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
I0601 18:02:27.082181 8928 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
I0601 18:02:27.085184 8928 out.go:177] - MINIKUBE_LOCATION=14079
I0601 18:02:27.087175 8928 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0601 18:02:27.089183 8928 config.go:178] Loaded profile config "functional-20220601175654-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0601 18:02:27.090206 8928 driver.go:358] Setting default libvirt URI to qemu:///system
I0601 18:02:29.982611 8928 docker.go:137] docker version: linux-20.10.14
I0601 18:02:29.989615 8928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0601 18:02:32.127496 8928 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.137771s)
I0601 18:02:32.128411 8928 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:61 OomKillDisable:true NGoroutines:62 SystemTime:2022-06-01 18:02:31.068937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0601 18:02:32.132513 8928 out.go:177] * Using the docker driver based on existing profile
I0601 18:02:32.135191 8928 start.go:284] selected driver: docker
I0601 18:02:32.135191 8928 start.go:806] validating driver "docker" against &{Name:functional-20220601175654-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601175654-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0601 18:02:32.135191 8928 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0601 18:02:32.155422 8928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0601 18:02:34.213193 8928 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0575405s)
I0601 18:02:34.213337 8928 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-01 18:02:33.2036185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0601 18:02:34.258691 8928 cni.go:95] Creating CNI manager for ""
I0601 18:02:34.258691 8928 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0601 18:02:34.258691 8928 start_flags.go:306] config:
{Name:functional-20220601175654-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601175654-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0601 18:02:34.262480 8928 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Wed 2022-06-01 17:57:49 UTC, end at Wed 2022-06-01 18:37:08 UTC. --
Jun 01 17:59:05 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T17:59:05.029235700Z" level=info msg="ignoring event" container=2ce0e4b38ac9f04643054592aef152247b94aae05e441b0889a44932c71646b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.626025400Z" level=info msg="ignoring event" container=e488dffa538a17d992d20869cc00e78495d271a2451bbb95f332b9135ed6c4ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.637360900Z" level=info msg="ignoring event" container=c7bb2447fa954e0f80325f625506f591267a93d4006a17435d6f60339e195cd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.637419900Z" level=info msg="ignoring event" container=dc621d8e12bb731b9e13f7ae612a8e8abdf02c5d57b50537217ea82c9f40ea93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.637628500Z" level=info msg="ignoring event" container=3253cbf1f4fe95a99732cce1ed9d390cd32de41ee4445e6aa46737745c931a0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.736178000Z" level=info msg="ignoring event" container=cb3703c9cc9afd86e857f1f5379232c178abe81a94549714e3a7ee8e262075bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.828329200Z" level=info msg="ignoring event" container=6ec6e3224eb37da6c6e69453bed14d1b984679895b31b867dd26575c033d0777 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.828774500Z" level=info msg="ignoring event" container=49aa6506b35f2e342010fe2d637e7622ef30d37f5be470c464871bb6c877ac88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.833784800Z" level=info msg="ignoring event" container=fd5dc025117b1e848f61caf495dc31bb95016f08def18176c72d985bea9b01fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.836713100Z" level=info msg="ignoring event" container=39af5bb7132841c3549814e4089e8ccab9cd4dcaa3f317ef2376ef52cfd9d5b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.926707900Z" level=info msg="ignoring event" container=54088515f3b1f2404dc743e506c50bb29f3bb7ccce2e493e1cd7878f9c2152dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.929511800Z" level=info msg="ignoring event" container=c85d97d43a303c9dfc5d9402d2af2d0a181576f005b3c743a841b2cea4699d18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:10 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:10.628746400Z" level=info msg="ignoring event" container=be5f34a9364f1696bb7ce89806fc3241fe5cd8dd9f251a707ad487ca889cc29e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:11 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:11.138211300Z" level=info msg="ignoring event" container=1d64c64b4d6316f982234fe788dbd2f7eea1b7bb4b882e0d7a7609c53aa3eca7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:11 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:11.330436200Z" level=info msg="ignoring event" container=d42dad7731a020c158dd22012ba5aab4c0f5071c8fffef9e20fc3ea1587e66b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:13 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:13.433350100Z" level=info msg="ignoring event" container=8d9da6f17adb539a53056a3d32608b9689314dd2abf921647b1813a9d2e24fcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:23 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:23.735898300Z" level=info msg="ignoring event" container=3cb91cda9605cf6855868db610bd5e0c407deef941823fda7c4cb3588cb002c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:24 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:24.743151000Z" level=info msg="ignoring event" container=b596035b2d603ed478d3b289168127e1479b806e29e73f62d30109d4b076dae0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:24 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:24.940279000Z" level=info msg="ignoring event" container=4a4e5a3bb4ea866446cc1b3437e2462ebbc2277a5bc597bbbd9d38adb928fa81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:25 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:25.146467200Z" level=info msg="ignoring event" container=249f3b6cebd0c3db2be3865994712e1487ff6b8b8d2b2931d0a40598b885b94a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:01:25 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:25.852056800Z" level=info msg="ignoring event" container=34197b3df9eb24955f2e2148de3a663881e459c57bca2d06639f836350b00930 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:04:29 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:04:29.437397600Z" level=info msg="ignoring event" container=428f2721d941ee9d29bee164b4a1e72f74826bea609f3fd8f3b28943beaba0f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:04:30 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:04:30.842686800Z" level=info msg="ignoring event" container=e2ab857ff931d067ffedd71ba632df14981b57ce80d29351a49991e38c08c79c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:05:22 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:05:22.827220500Z" level=info msg="ignoring event" container=815eebbfb5932518e2a8ae234e1a7d9fd526c3d612fce947e84ab01236dfb725 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 18:05:23 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:05:23.452415400Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
6a459f606dda5 82e4c8a736a4f 32 minutes ago Running echoserver 0 2af19e9c1aff9
3f4995e01d5df nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514 32 minutes ago Running myfrontend 0 d31489858fe70
bb5e57c07e787 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 32 minutes ago Running echoserver 0 0a3729a925ea3
c2fee58ceb383 nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989 33 minutes ago Running nginx 0 91606cdc227db
fc1ceb8f5f911 mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5 33 minutes ago Running mysql 0 2f251b72b2974
e059ac677a6c9 6e38f40d628db 35 minutes ago Running storage-provisioner 3 1f3103e42f8a7
2efa890199e80 df7b72818ad2e 35 minutes ago Running kube-controller-manager 2 0255071d48864
9d9490ac77090 a4ca41631cc7a 35 minutes ago Running coredns 1 575aa12ff7773
ce413f7f994b9 8fa62c12256df 35 minutes ago Running kube-apiserver 1 2570071be0701
249f3b6cebd0c 6e38f40d628db 35 minutes ago Exited storage-provisioner 2 1f3103e42f8a7
3cb91cda9605c 8fa62c12256df 35 minutes ago Exited kube-apiserver 0 2570071be0701
f8476e9b4b726 595f327f224a4 35 minutes ago Running kube-scheduler 1 03e84dc342a31
26e97c628a456 25f8c7f3da61c 35 minutes ago Running etcd 1 d73eb68b51a3b
9ee2f9d1ae9d7 4c03754524064 35 minutes ago Running kube-proxy 1 3ee0efaba4f8b
34197b3df9eb2 df7b72818ad2e 35 minutes ago Exited kube-controller-manager 1 0255071d48864
8d9da6f17adb5 a4ca41631cc7a 38 minutes ago Exited coredns 0 fd5dc025117b1
54088515f3b1f 4c03754524064 38 minutes ago Exited kube-proxy 0 6ec6e3224eb37
c85d97d43a303 25f8c7f3da61c 38 minutes ago Exited etcd 0 3253cbf1f4fe9
1d64c64b4d631 595f327f224a4 38 minutes ago Exited kube-scheduler 0 39af5bb713284
*
* ==> coredns [8d9da6f17adb] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
[INFO] Reloading complete
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [9d9490ac7709] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: functional-20220601175654-3412
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-20220601175654-3412
kubernetes.io/os=linux
minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
minikube.k8s.io/name=functional-20220601175654-3412
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_06_01T17_58_39_0700
minikube.k8s.io/version=v1.26.0-beta.1
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 01 Jun 2022 17:58:34 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-20220601175654-3412
AcquireTime: <unset>
RenewTime: Wed, 01 Jun 2022 18:37:07 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 01 Jun 2022 18:36:39 +0000 Wed, 01 Jun 2022 17:58:29 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 01 Jun 2022 18:36:39 +0000 Wed, 01 Jun 2022 17:58:29 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 01 Jun 2022 18:36:39 +0000 Wed, 01 Jun 2022 17:58:29 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 01 Jun 2022 18:36:39 +0000 Wed, 01 Jun 2022 17:58:50 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-20220601175654-3412
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: e0d7477b601740b2a7c32c13851e505c
System UUID: e0d7477b601740b2a7c32c13851e505c
Boot ID: 3154680d-09d7-4698-9003-0db79e83a883
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.16
Kubelet Version: v1.23.6
Kube-Proxy Version: v1.23.6
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-54fbb85-6c9nb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m
default hello-node-connect-74cf8bc446-kpkgl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m
default mysql-b87c45988-9rpl2 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 34m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m
kube-system coredns-64897985d-jnnzd 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-20220601175654-3412 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-20220601175654-3412 250m (1%) 0 (0%) 0 (0%) 0 (0%) 35m
kube-system kube-controller-manager-functional-20220601175654-3412 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-6vsfj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-20220601175654-3412 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 35m kube-proxy
Normal Starting 38m kube-proxy
Normal NodeHasNoDiskPressure 38m (x5 over 38m) kubelet Node functional-20220601175654-3412 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m (x5 over 38m) kubelet Node functional-20220601175654-3412 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 38m (x6 over 38m) kubelet Node functional-20220601175654-3412 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-20220601175654-3412 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-20220601175654-3412 status is now: NodeHasSufficientPID
Normal NodeNotReady 38m kubelet Node functional-20220601175654-3412 status is now: NodeNotReady
Normal NodeHasSufficientMemory 38m kubelet Node functional-20220601175654-3412 status is now: NodeHasSufficientMemory
Normal Starting 38m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 38m kubelet Node functional-20220601175654-3412 status is now: NodeReady
Normal Starting 35m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 35m (x3 over 35m) kubelet Node functional-20220601175654-3412 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 35m (x3 over 35m) kubelet Node functional-20220601175654-3412 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 35m (x3 over 35m) kubelet Node functional-20220601175654-3412 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 35m kubelet Updated Node Allocatable limit across pods
*
* ==> dmesg <==
* [Jun 1 18:11] WSL2: Performing memory compaction.
[Jun 1 18:12] WSL2: Performing memory compaction.
[Jun 1 18:13] WSL2: Performing memory compaction.
[Jun 1 18:14] WSL2: Performing memory compaction.
[Jun 1 18:15] WSL2: Performing memory compaction.
[Jun 1 18:16] WSL2: Performing memory compaction.
[Jun 1 18:17] WSL2: Performing memory compaction.
[Jun 1 18:18] WSL2: Performing memory compaction.
[Jun 1 18:19] WSL2: Performing memory compaction.
[Jun 1 18:20] WSL2: Performing memory compaction.
[Jun 1 18:21] WSL2: Performing memory compaction.
[Jun 1 18:22] WSL2: Performing memory compaction.
[Jun 1 18:23] WSL2: Performing memory compaction.
[Jun 1 18:24] WSL2: Performing memory compaction.
[Jun 1 18:25] WSL2: Performing memory compaction.
[Jun 1 18:27] WSL2: Performing memory compaction.
[Jun 1 18:28] WSL2: Performing memory compaction.
[Jun 1 18:29] WSL2: Performing memory compaction.
[Jun 1 18:30] WSL2: Performing memory compaction.
[Jun 1 18:31] WSL2: Performing memory compaction.
[Jun 1 18:32] WSL2: Performing memory compaction.
[Jun 1 18:33] WSL2: Performing memory compaction.
[Jun 1 18:34] WSL2: Performing memory compaction.
[Jun 1 18:35] WSL2: Performing memory compaction.
[Jun 1 18:36] WSL2: Performing memory compaction.
*
* ==> etcd [26e97c628a45] <==
* {"level":"info","ts":"2022-06-01T18:04:13.935Z","caller":"traceutil/trace.go:171","msg":"trace[254918965] transaction","detail":"{read_only:false; response_revision:806; number_of_response:1; }","duration":"869.3601ms","start":"2022-06-01T18:04:13.065Z","end":"2022-06-01T18:04:13.935Z","steps":["trace[254918965] 'process raft request' (duration: 769.1438ms)","trace[254918965] 'compare' (duration: 99.8969ms)"],"step_count":2}
{"level":"warn","ts":"2022-06-01T18:04:13.935Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T18:04:13.065Z","time spent":"869.7186ms","remote":"127.0.0.1:45898","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:798 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128013403777340701 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
{"level":"warn","ts":"2022-06-01T18:04:20.921Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"374.1672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/nginx-svc\" ","response":"range_response_count:1 size:1128"}
{"level":"warn","ts":"2022-06-01T18:04:20.922Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"466.248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-01T18:04:20.922Z","caller":"traceutil/trace.go:171","msg":"trace[1298226982] range","detail":"{range_begin:/registry/services/specs/default/nginx-svc; range_end:; response_count:1; response_revision:812; }","duration":"374.327ms","start":"2022-06-01T18:04:20.547Z","end":"2022-06-01T18:04:20.922Z","steps":["trace[1298226982] 'range keys from in-memory index tree' (duration: 374.0663ms)"],"step_count":1}
{"level":"info","ts":"2022-06-01T18:04:20.922Z","caller":"traceutil/trace.go:171","msg":"trace[30369322] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:812; }","duration":"466.2916ms","start":"2022-06-01T18:04:20.455Z","end":"2022-06-01T18:04:20.922Z","steps":["trace[30369322] 'range keys from in-memory index tree' (duration: 465.7698ms)"],"step_count":1}
{"level":"warn","ts":"2022-06-01T18:04:20.922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T18:04:20.455Z","time spent":"466.3354ms","remote":"127.0.0.1:45992","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2022-06-01T18:04:20.922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T18:04:20.547Z","time spent":"374.3843ms","remote":"127.0.0.1:45954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":1152,"request content":"key:\"/registry/services/specs/default/nginx-svc\" "}
{"level":"warn","ts":"2022-06-01T18:04:20.922Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"250.7619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/default/\" range_end:\"/registry/resourcequotas/default0\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-01T18:04:20.922Z","caller":"traceutil/trace.go:171","msg":"trace[953612809] range","detail":"{range_begin:/registry/resourcequotas/default/; range_end:/registry/resourcequotas/default0; response_count:0; response_revision:812; }","duration":"250.8126ms","start":"2022-06-01T18:04:20.671Z","end":"2022-06-01T18:04:20.922Z","steps":["trace[953612809] 'range keys from in-memory index tree' (duration: 250.589ms)"],"step_count":1}
{"level":"info","ts":"2022-06-01T18:04:21.086Z","caller":"traceutil/trace.go:171","msg":"trace[102993881] transaction","detail":"{read_only:false; response_revision:814; number_of_response:1; }","duration":"130.1139ms","start":"2022-06-01T18:04:20.956Z","end":"2022-06-01T18:04:21.086Z","steps":["trace[102993881] 'process raft request' (duration: 115.8874ms)","trace[102993881] 'compare' (duration: 14.0267ms)"],"step_count":2}
{"level":"warn","ts":"2022-06-01T18:04:21.557Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.0244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-01T18:04:21.557Z","caller":"traceutil/trace.go:171","msg":"trace[187544958] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:831; }","duration":"104.1873ms","start":"2022-06-01T18:04:21.453Z","end":"2022-06-01T18:04:21.557Z","steps":["trace[187544958] 'agreement among raft nodes before linearized reading' (duration: 81.6503ms)","trace[187544958] 'range keys from in-memory index tree' (duration: 22.3404ms)"],"step_count":2}
{"level":"info","ts":"2022-06-01T18:11:26.892Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":975}
{"level":"info","ts":"2022-06-01T18:11:26.894Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":975,"took":"1.2789ms"}
{"level":"info","ts":"2022-06-01T18:16:26.922Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1186}
{"level":"info","ts":"2022-06-01T18:16:26.923Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1186,"took":"1.0149ms"}
{"level":"info","ts":"2022-06-01T18:21:26.966Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1395}
{"level":"info","ts":"2022-06-01T18:21:26.967Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1395,"took":"581.5µs"}
{"level":"info","ts":"2022-06-01T18:26:26.998Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1605}
{"level":"info","ts":"2022-06-01T18:26:26.999Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1605,"took":"631.8µs"}
{"level":"info","ts":"2022-06-01T18:31:27.029Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1816}
{"level":"info","ts":"2022-06-01T18:31:27.030Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1816,"took":"651.9µs"}
{"level":"info","ts":"2022-06-01T18:36:27.056Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2024}
{"level":"info","ts":"2022-06-01T18:36:27.057Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2024,"took":"627.8µs"}
*
* ==> etcd [c85d97d43a30] <==
* {"level":"info","ts":"2022-06-01T17:58:30.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2022-06-01T17:58:30.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-06-01T17:58:30.037Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220601175654-3412 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-01T17:58:30.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-01T17:58:30.042Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-01T17:58:30.043Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-06-01T17:58:30.043Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-01T17:58:30.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"warn","ts":"2022-06-01T17:58:52.436Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.1306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3074"}
{"level":"info","ts":"2022-06-01T17:58:52.436Z","caller":"traceutil/trace.go:171","msg":"trace[2006106820] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:396; }","duration":"101.3657ms","start":"2022-06-01T17:58:52.335Z","end":"2022-06-01T17:58:52.436Z","steps":["trace[2006106820] 'agreement among raft nodes before linearized reading' (duration: 100.9958ms)"],"step_count":1}
{"level":"warn","ts":"2022-06-01T17:58:53.322Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.2369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-zhgqb\" ","response":"range_response_count:1 size:3472"}
{"level":"info","ts":"2022-06-01T17:58:53.322Z","caller":"traceutil/trace.go:171","msg":"trace[870259502] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-zhgqb; range_end:; response_count:1; response_revision:434; }","duration":"100.4721ms","start":"2022-06-01T17:58:53.221Z","end":"2022-06-01T17:58:53.322Z","steps":["trace[870259502] 'range keys from in-memory index tree' (duration: 100.1141ms)"],"step_count":1}
{"level":"info","ts":"2022-06-01T18:01:08.430Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-06-01T18:01:08.430Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220601175654-3412","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/06/01 18:01:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/06/01 18:01:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-06-01T18:01:08.527Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-06-01T18:01:08.633Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-01T18:01:08.635Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-01T18:01:08.635Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220601175654-3412","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> kernel <==
* 18:37:09 up 58 min, 0 users, load average: 0.27, 0.27, 0.37
Linux functional-20220601175654-3412 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [3cb91cda9605] <==
* I0601 18:01:23.667946 1 server.go:565] external host was not specified, using 192.168.49.2
I0601 18:01:23.669104 1 server.go:172] Version: v1.23.6
E0601 18:01:23.669749 1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
*
* ==> kube-apiserver [ce413f7f994b] <==
* I0601 18:02:41.250133 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0601 18:02:41.548927 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0601 18:02:57.328181 1 trace.go:205] Trace[1506951188]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (01-Jun-2022 18:02:56.328) (total time: 999ms):
Trace[1506951188]: [999.2698ms] [999.2698ms] END
I0601 18:02:57.329058 1 trace.go:205] Trace[1480288553]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:f3bd5c24-4c59-4397-90f6-e54ae86deb0b,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 18:02:56.328) (total time: 1000ms):
Trace[1480288553]: ---"Listing from storage done" 999ms (18:02:57.328)
Trace[1480288553]: [1.0002692s] [1.0002692s] END
{"level":"warn","ts":"2022-06-01T18:03:20.442Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ca1c0/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
{"level":"warn","ts":"2022-06-01T18:03:20.451Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ca1c0/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I0601 18:03:20.552152 1 trace.go:205] Trace[1465098381]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:d3639560-931e-4eb1-b10b-67e20a9b8cbd,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 18:03:18.758) (total time: 1793ms):
Trace[1465098381]: ---"About to write a response" 1793ms (18:03:20.551)
Trace[1465098381]: [1.7933127s] [1.7933127s] END
I0601 18:03:20.552419 1 trace.go:205] Trace[1519418264]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (01-Jun-2022 18:03:18.333) (total time: 2219ms):
Trace[1519418264]: [2.2191318s] [2.2191318s] END
I0601 18:03:20.553198 1 trace.go:205] Trace[237103339]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:3b310fa1-aa4d-4309-a7ec-5d1ae8cbb15f,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 18:03:18.333) (total time: 2219ms):
Trace[237103339]: ---"Listing from storage done" 2219ms (18:03:20.552)
Trace[237103339]: [2.2199462s] [2.2199462s] END
I0601 18:03:48.854316 1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.108.73.94]
I0601 18:04:13.936263 1 trace.go:205] Trace[630384542]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (01-Jun-2022 18:04:13.061) (total time: 874ms):
Trace[630384542]: ---"Transaction committed" 870ms (18:04:13.936)
Trace[630384542]: [874.9458ms] [874.9458ms] END
I0601 18:04:21.409539 1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.98.95.14]
I0601 18:04:40.093795 1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.100.28.255]
W0601 18:17:49.374786 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0601 18:35:57.787024 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
*
* ==> kube-controller-manager [2efa890199e8] <==
* I0601 18:01:44.328872 1 shared_informer.go:247] Caches are synced for deployment
I0601 18:01:44.329416 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0601 18:01:44.329500 1 shared_informer.go:247] Caches are synced for TTL after finished
I0601 18:01:44.329420 1 shared_informer.go:247] Caches are synced for endpoint
I0601 18:01:44.343797 1 shared_informer.go:247] Caches are synced for HPA
I0601 18:01:44.344678 1 shared_informer.go:247] Caches are synced for stateful set
I0601 18:01:44.344684 1 shared_informer.go:247] Caches are synced for attach detach
I0601 18:01:44.429940 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0601 18:01:44.430250 1 shared_informer.go:247] Caches are synced for persistent volume
I0601 18:01:44.444052 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0601 18:01:44.527768 1 shared_informer.go:247] Caches are synced for disruption
I0601 18:01:44.527908 1 disruption.go:371] Sending events to api server.
I0601 18:01:44.528066 1 shared_informer.go:247] Caches are synced for resource quota
I0601 18:01:44.528312 1 shared_informer.go:247] Caches are synced for resource quota
I0601 18:01:44.830145 1 shared_informer.go:247] Caches are synced for garbage collector
I0601 18:01:44.866567 1 shared_informer.go:247] Caches are synced for garbage collector
I0601 18:01:44.866668 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0601 18:02:41.260761 1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
I0601 18:02:41.531793 1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-9rpl2"
I0601 18:04:01.891080 1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0601 18:04:01.891230 1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0601 18:04:21.089469 1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
I0601 18:04:21.123807 1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-kpkgl"
I0601 18:04:39.781810 1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
I0601 18:04:39.785742 1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-6c9nb"
*
* ==> kube-controller-manager [34197b3df9eb] <==
* /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:157 +0x9e
k8s.io/kubernetes/pkg/controller/serviceaccount.(*TokensController).syncSecret(0xc000e24a20)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/serviceaccount/tokens_controller.go:268 +0x53
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000ba9f00)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2a236e6070e5dd3a, {0x4d500a0, 0xc000fe8690}, 0x1, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xd7ddf9310e7bd0be, 0x0, 0x0, 0xde, 0xec8a4e1ac4b49010)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x697e1aa9b60ab056, 0x814df45ccd3b02de, 0x2589b81591c6c8b9)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/kubernetes/pkg/controller/serviceaccount.(*TokensController).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/serviceaccount/tokens_controller.go:180 +0x245
goroutine 355 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:301 +0x77
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:300 +0xc8
goroutine 356 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:708 +0x1c9
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:691 +0xcf
*
* ==> kube-proxy [54088515f3b1] <==
* E0601 17:58:56.221181 1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I0601 17:58:56.228048 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0601 17:58:56.231778 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0601 17:58:56.234793 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0601 17:58:56.237880 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0601 17:58:56.241003 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I0601 17:58:56.525646 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0601 17:58:56.525776 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0601 17:58:56.525840 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0601 17:58:56.825135 1 server_others.go:206] "Using iptables Proxier"
I0601 17:58:56.825269 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0601 17:58:56.825286 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0601 17:58:56.825334 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0601 17:58:56.826191 1 server.go:656] "Version info" version="v1.23.6"
I0601 17:58:56.827267 1 config.go:317] "Starting service config controller"
I0601 17:58:56.827424 1 shared_informer.go:240] Waiting for caches to sync for service config
I0601 17:58:56.827779 1 config.go:226] "Starting endpoint slice config controller"
I0601 17:58:56.827939 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0601 17:58:56.928822 1 shared_informer.go:247] Caches are synced for service config
I0601 17:58:56.929061 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-proxy [9ee2f9d1ae9d] <==
* E0601 18:01:13.043339 1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I0601 18:01:13.046778 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0601 18:01:13.049310 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0601 18:01:13.126889 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0601 18:01:13.133157 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0601 18:01:13.137532 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E0601 18:01:13.141559 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220601175654-3412": dial tcp 192.168.49.2:8441: connect: connection refused
E0601 18:01:14.311819 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220601175654-3412": dial tcp 192.168.49.2:8441: connect: connection refused
I0601 18:01:21.335488 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0601 18:01:21.335602 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0601 18:01:21.335634 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0601 18:01:21.735872 1 server_others.go:206] "Using iptables Proxier"
I0601 18:01:21.736032 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0601 18:01:21.736046 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0601 18:01:21.736076 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0601 18:01:21.737101 1 server.go:656] "Version info" version="v1.23.6"
I0601 18:01:21.741418 1 config.go:226] "Starting endpoint slice config controller"
I0601 18:01:21.741861 1 config.go:317] "Starting service config controller"
I0601 18:01:21.742573 1 shared_informer.go:240] Waiting for caches to sync for service config
I0601 18:01:21.742482 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0601 18:01:21.844649 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0601 18:01:21.844889 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [1d64c64b4d63] <==
* E0601 17:58:36.087333 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0601 17:58:36.092752 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0601 17:58:36.092949 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0601 17:58:36.117662 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0601 17:58:36.117770 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0601 17:58:36.121357 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0601 17:58:36.121463 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0601 17:58:36.138367 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0601 17:58:36.138484 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0601 17:58:36.317903 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0601 17:58:36.318014 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0601 17:58:36.418580 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0601 17:58:36.418720 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0601 17:58:36.528358 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0601 17:58:36.528508 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0601 17:58:36.558910 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0601 17:58:36.559020 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0601 17:58:36.617994 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0601 17:58:36.618094 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0601 17:58:38.104563 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0601 17:58:38.104715 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0601 17:58:39.034025 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0601 18:01:08.534747 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0601 18:01:08.535086 1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
I0601 18:01:08.535167 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
*
* ==> kube-scheduler [f8476e9b4b72] <==
* I0601 18:01:14.928715 1 serving.go:348] Generated self-signed cert in-memory
W0601 18:01:21.326767 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0601 18:01:21.326805 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
W0601 18:01:21.326825 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0601 18:01:21.326838 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0601 18:01:21.526318 1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
I0601 18:01:21.529110 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0601 18:01:21.529720 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0601 18:01:21.529741 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0601 18:01:21.529777 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0601 18:01:21.629987 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0601 18:01:31.233478 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
E0601 18:01:31.233680 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
E0601 18:01:31.233744 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0601 18:01:31.233814 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
E0601 18:01:31.233883 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
E0601 18:01:31.328063 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
*
* ==> kubelet <==
* -- Logs begin at Wed 2022-06-01 17:57:49 UTC, end at Wed 2022-06-01 18:37:10 UTC. --
Jun 01 18:04:32 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:32.158468 6101 reconciler.go:300] "Volume detached for volume \"kube-api-access-5c74l\" (UniqueName: \"kubernetes.io/projected/3a5da3c5-8277-4bc6-b783-7051fd58f871-kube-api-access-5c74l\") on node \"functional-20220601175654-3412\" DevicePath \"\""
Jun 01 18:04:32 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:32.158597 6101 reconciler.go:300] "Volume detached for volume \"pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9\" (UniqueName: \"kubernetes.io/host-path/3a5da3c5-8277-4bc6-b783-7051fd58f871-pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9\") on node \"functional-20220601175654-3412\" DevicePath \"\""
Jun 01 18:04:32 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:32.959523 6101 scope.go:110] "RemoveContainer" containerID="428f2721d941ee9d29bee164b4a1e72f74826bea609f3fd8f3b28943beaba0f5"
Jun 01 18:04:33 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:33.607897 6101 topology_manager.go:200] "Topology Admit Handler"
Jun 01 18:04:33 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:33.837277 6101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9kw\" (UniqueName: \"kubernetes.io/projected/19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd-kube-api-access-hw9kw\") pod \"sp-pod\" (UID: \"19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd\") " pod="default/sp-pod"
Jun 01 18:04:33 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:33.837482 6101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9\" (UniqueName: \"kubernetes.io/host-path/19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd-pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9\") pod \"sp-pod\" (UID: \"19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd\") " pod="default/sp-pod"
Jun 01 18:04:33 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:33.974227 6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-kpkgl through plugin: invalid network status for"
Jun 01 18:04:34 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:34.535202 6101 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3a5da3c5-8277-4bc6-b783-7051fd58f871 path="/var/lib/kubelet/pods/3a5da3c5-8277-4bc6-b783-7051fd58f871/volumes"
Jun 01 18:04:34 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:34.965148 6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
Jun 01 18:04:35 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:35.009102 6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
Jun 01 18:04:36 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:36.026458 6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
Jun 01 18:04:37 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:37.232507 6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
Jun 01 18:04:39 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:39.797741 6101 topology_manager.go:200] "Topology Admit Handler"
Jun 01 18:04:39 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:39.891576 6101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vcx9\" (UniqueName: \"kubernetes.io/projected/6dff7187-bcdf-4179-b4f5-61f1663b106c-kube-api-access-2vcx9\") pod \"hello-node-54fbb85-6c9nb\" (UID: \"6dff7187-bcdf-4179-b4f5-61f1663b106c\") " pod="default/hello-node-54fbb85-6c9nb"
Jun 01 18:04:40 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:40.936762 6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-6c9nb through plugin: invalid network status for"
Jun 01 18:04:40 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:40.936948 6101 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2af19e9c1aff9c35f54f45dbee58eb8b57601e3f4e136cca4a9d4f5b1d525992"
Jun 01 18:04:41 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:41.953808 6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-6c9nb through plugin: invalid network status for"
Jun 01 18:05:21 functional-20220601175654-3412 kubelet[6101]: E0601 18:05:21.026121 6101 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/5dd22e46ce26d76ce53ed70eb816f68071a75adce2dd558df7d59cf62c541102/diff" to get inode usage: stat /var/lib/docker/overlay2/5dd22e46ce26d76ce53ed70eb816f68071a75adce2dd558df7d59cf62c541102/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/e2ab857ff931d067ffedd71ba632df14981b57ce80d29351a49991e38c08c79c" to get inode usage: stat /var/lib/docker/containers/e2ab857ff931d067ffedd71ba632df14981b57ce80d29351a49991e38c08c79c: no such file or directory
Jun 01 18:06:20 functional-20220601175654-3412 kubelet[6101]: W0601 18:06:20.990761 6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 18:11:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:11:21.004791 6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 18:16:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:16:21.021117 6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 18:21:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:21:21.036896 6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 18:26:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:26:21.053184 6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 18:31:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:31:21.069505 6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 18:36:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:36:21.083555 6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [249f3b6cebd0] <==
* I0601 18:01:25.030891 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0601 18:01:25.041225 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> storage-provisioner [e059ac677a6c] <==
* I0601 18:01:42.849361 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0601 18:01:42.952392 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0601 18:01:42.952590 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0601 18:02:00.541982 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0601 18:02:00.542191 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31fc7c77-817a-49e9-98d6-e90848c88c5b", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220601175654-3412_99bde5bd-9ca0-41b5-8e78-2ef4bc83d1fd became leader
I0601 18:02:00.542369 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220601175654-3412_99bde5bd-9ca0-41b5-8e78-2ef4bc83d1fd!
I0601 18:02:00.643401 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220601175654-3412_99bde5bd-9ca0-41b5-8e78-2ef4bc83d1fd!
I0601 18:04:01.891883 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0601 18:04:01.892161 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 45b79934-94bb-464c-ae51-897b26c8a5cb 463 0 2022-06-01 17:58:59 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-06-01 17:58:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9 &PersistentVolumeClaim{ObjectMeta:{myclaim default 2cd87724-3bba-4cab-b1a2-a68496ffc9e9 785 0 2022-06-01 18:04:01 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-06-01 18:04:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-06-01 18:04:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0601 18:04:01.893035 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2cd87724-3bba-4cab-b1a2-a68496ffc9e9", APIVersion:"v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0601 18:04:01.893505 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9" provisioned
I0601 18:04:01.893653 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0601 18:04:01.893667 1 volume_store.go:212] Trying to save persistentvolume "pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9"
I0601 18:04:01.940106 1 volume_store.go:219] persistentvolume "pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9" saved
I0601 18:04:01.940597 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2cd87724-3bba-4cab-b1a2-a68496ffc9e9", APIVersion:"v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220601175654-3412 -n functional-20220601175654-3412
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220601175654-3412 -n functional-20220601175654-3412: (6.3847037s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-20220601175654-3412 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-20220601175654-3412 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220601175654-3412 describe pod : exit status 1 (228.9343ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-20220601175654-3412 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (1958.41s)