=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run: kubectl --context functional-20220801230127-2732 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run: kubectl --context functional-20220801230127-2732 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-jwsmk" [1b2833b0-83ed-4226-9fb4-84773584d350] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-jwsmk" [1b2833b0-83ed-4226-9fb4-84773584d350] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 39.0890602s
functional_test.go:1448: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220801230127-2732 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220801230127-2732 service list: (3.7899861s)
functional_test.go:1462: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220801230127-2732 service --namespace=default --https --url hello-node
functional_test.go:1391: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220801230127-2732 service --namespace=default --https --url hello-node: exit status 1 (34m19.8729234s)
-- stdout --
https://127.0.0.1:63992
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1464: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220801230127-2732 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run: kubectl --context functional-20220801230127-2732 describe po hello-node
functional_test.go:1409: hello-node pod describe:
Name: hello-node-54c4b5c49f-jwsmk
Namespace: default
Priority: 0
Node: functional-20220801230127-2732/192.168.49.2
Start Time: Mon, 01 Aug 2022 23:06:23 +0000
Labels: app=hello-node
pod-template-hash=54c4b5c49f
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/hello-node-54c4b5c49f
Containers:
echoserver:
Container ID: docker://74d95f83d7c60fcb7cb2ac9780bbd75500ad22317594f81da15452558b09bf1f
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 01 Aug 2022 23:06:56 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6js9r (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-6js9r:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-54c4b5c49f-jwsmk to functional-20220801230127-2732
Normal Pulling 35m kubelet, functional-20220801230127-2732 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 34m kubelet, functional-20220801230127-2732 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 27.8812142s
Normal Created 34m kubelet, functional-20220801230127-2732 Created container echoserver
Normal Started 34m kubelet, functional-20220801230127-2732 Started container echoserver
Name: hello-node-connect-578cdc45cb-cx7zl
Namespace: default
Priority: 0
Node: functional-20220801230127-2732/192.168.49.2
Start Time: Mon, 01 Aug 2022 23:06:22 +0000
Labels: app=hello-node-connect
pod-template-hash=578cdc45cb
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/hello-node-connect-578cdc45cb
Containers:
echoserver:
Container ID: docker://09fcbccdd96505f56ae4a45fd56f8b21378806c9c7ac1d212527c0e74b7d7f95
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 01 Aug 2022 23:06:55 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49n9t (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-49n9t:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-578cdc45cb-cx7zl to functional-20220801230127-2732
Normal Pulling 35m kubelet, functional-20220801230127-2732 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 34m kubelet, functional-20220801230127-2732 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 29.7381411s
Normal Created 34m kubelet, functional-20220801230127-2732 Created container echoserver
Normal Started 34m kubelet, functional-20220801230127-2732 Started container echoserver
functional_test.go:1411: (dbg) Run: kubectl --context functional-20220801230127-2732 logs -l app=hello-node
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run: kubectl --context functional-20220801230127-2732 describe svc hello-node
functional_test.go:1421: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.100.19.40
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32262/TCP
Endpoints: 172.17.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-20220801230127-2732
helpers_test.go:235: (dbg) docker inspect functional-20220801230127-2732:
-- stdout --
[
{
"Id": "0616180a3fb9991e1896bee9bba45bb9adcaf03c432d6add2483b651cb37323d",
"Created": "2022-08-01T23:02:09.3085887Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 26276,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-08-01T23:02:10.2756245Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
"ResolvConfPath": "/var/lib/docker/containers/0616180a3fb9991e1896bee9bba45bb9adcaf03c432d6add2483b651cb37323d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0616180a3fb9991e1896bee9bba45bb9adcaf03c432d6add2483b651cb37323d/hostname",
"HostsPath": "/var/lib/docker/containers/0616180a3fb9991e1896bee9bba45bb9adcaf03c432d6add2483b651cb37323d/hosts",
"LogPath": "/var/lib/docker/containers/0616180a3fb9991e1896bee9bba45bb9adcaf03c432d6add2483b651cb37323d/0616180a3fb9991e1896bee9bba45bb9adcaf03c432d6add2483b651cb37323d-json.log",
"Name": "/functional-20220801230127-2732",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-20220801230127-2732:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-20220801230127-2732",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b499024b2953a6bd7afdfb55a8274190ac7eb31414d270449a766a57a0df148d-init/diff:/var/lib/docker/overlay2/442cec4154f2ce13f10c5d710cd8854c2c3e23fa1e256f6fc815ea0dc1093d67/diff:/var/lib/docker/overlay2/a7e5ef9bb7e7a107b3ce12ad21b2fa856d8e247666415b52c04d94b7c230c845/diff:/var/lib/docker/overlay2/d20cba56264cf9d8d506566a739fa4fd0cb14d6fe6f47ebf4f8cb78d018dd04d/diff:/var/lib/docker/overlay2/f1696b6759801922cdfea8785daf0c5ba959fc308dc7ac12138355306d3148f6/diff:/var/lib/docker/overlay2/ed2e334665fa5d7f87eb393c1ac5e2259752b10903573d80d0901397849497de/diff:/var/lib/docker/overlay2/d929b6a51c381eb011584d9b54d23b44f942d2f95c659bc2d0a0e1236b9dd9e5/diff:/var/lib/docker/overlay2/b810fc269a1919ac238496b6e7173121b2116c0a85f4805f3e6ec20214248bb0/diff:/var/lib/docker/overlay2/d8002b680428cd6001f33fd563edc91070da353b1bd6affcafef51efb0751b6e/diff:/var/lib/docker/overlay2/e41ef416a33e42cef250e47b9170a0fd0a5804d3c2e33091f26acf36e98d5e08/diff:/var/lib/docker/overlay2/685494
57ab3f1dfb6d1582df92dec9f202e509ab28ae6a10e2d95aa9527de239/diff:/var/lib/docker/overlay2/044bed83e223abde3966dc5c346a17e00b315dd38d4a729716fc16c5a0992056/diff:/var/lib/docker/overlay2/045652d39aa51b4858883985c0c154c4de1f37324f039cfe013bbccc87310a3e/diff:/var/lib/docker/overlay2/171262836c0b2fb67b382705460e90334ff734b0980fb85959c5467a07abae65/diff:/var/lib/docker/overlay2/e72f9e78c803b1603a052fd49bb1c3fef9e9b937ea5aef08e91c41f7c3f4468d/diff:/var/lib/docker/overlay2/9474d862542c6887303a6491f738e0a09356f21303d8f551fdc6819ee0a176ae/diff:/var/lib/docker/overlay2/d6aebfde21745fcd72f8da91e1c5b439dbcfef99bd0aba730e78bc6f53d0fc8a/diff:/var/lib/docker/overlay2/b023a5ca809e3a88b26692072a6c88eb740a31cb49d18f21a02aed4032d62357/diff:/var/lib/docker/overlay2/75e46426d4687f0efed1a08fdb5c777575c3fe46b395c9d4b0e42392dd68efe1/diff:/var/lib/docker/overlay2/7e52c92dd38b8f928aeb1e6097727a23db64629b9cdc7fcfd323804c91149773/diff:/var/lib/docker/overlay2/457d546d58c370271660529c2dbb3929b450747f2e8bb6c070d9e86ba4f98249/diff:/var/lib/d
ocker/overlay2/97d8ffe0e39bc1c79ed0eebf722ff6596843a71e2c50b09b4fbff70959ddbdd6/diff:/var/lib/docker/overlay2/277177ac835f346d7e3434f5d34205b31c62cf91507a1630577317c6b5e3c339/diff:/var/lib/docker/overlay2/36bc99cf9e2a5cb057bc8d772fb4dbe15bef728a05874aec192a620a20654adf/diff:/var/lib/docker/overlay2/1a2dc9d2c88faaef1e259b7bbdb488985e76de2b1ff0cc23fd90d349c584b516/diff:/var/lib/docker/overlay2/ed564536c32c6941e1db545bf3f47722aec37544a8c4bae2cb39b2d68dc76d68/diff:/var/lib/docker/overlay2/0c2a69e5775e21b8c83f69cce9083277d8d5f86a3eb02cc96c13d5a9baf90500/diff:/var/lib/docker/overlay2/0c1f4e750f5460f9d44bd64b918888e7dabb2640b2efab4689c8ca159d4157ed/diff:/var/lib/docker/overlay2/d73a86b4eacb0fe30b7d3d951755c8d74a02651375c8ff9a9241ff9e7df8c335/diff:/var/lib/docker/overlay2/a1d77820fd6763a7ba158b4308fbfa1a0d3f8321933e58d2222690fefaa87196/diff:/var/lib/docker/overlay2/47c05c1a210d35229a9059e00c7acd8383440250620c1604720a8ab5947b5851/diff:/var/lib/docker/overlay2/e80e2d303fa19b65717730dace6055da2d4d4644962726fd308bfc71c84
34515/diff:/var/lib/docker/overlay2/29b601621e245edeeb2618b8a1640622e2e815e33d272c9f6a322e9fc0714656/diff:/var/lib/docker/overlay2/a6b95f6fa3fca2365c390c371586bc94465408fbd259076b96168ff4ac1d80e8/diff:/var/lib/docker/overlay2/ac998cdc29e9fd864ea46aa409702150821e584a0be6b068555a35751e123172/diff:/var/lib/docker/overlay2/f7d08cb39e2bd1a95016c6b6f22a530fdaa86937ec1fc6f55af4c6538e900387/diff:/var/lib/docker/overlay2/a374c21ccbf876b9c6ee91a0aea7d489acbb5d87012a8eb359ff4254bb13e9bd/diff:/var/lib/docker/overlay2/3b22b4940cd2e1521efd80d88cf2ff96c180c9b0d0036d74b3dcec476bb93a08/diff:/var/lib/docker/overlay2/1d182bd9eb32878f9120a74256c621b48f38075c3e2b85ff50ef92415e4f558f/diff:/var/lib/docker/overlay2/80e45d06f7a619861fb7369d3ed475a94994cc144bdced7dd7bdc1dbc6149c6c/diff:/var/lib/docker/overlay2/78d8555d8e439c764bc8c925cd36f2cab38684ff48135737f1fe775559a61a11/diff:/var/lib/docker/overlay2/3e5b6b44b906db13e650e45e10aff60ef73c3b569d05c68b30095e0e16ea00f9/diff",
"MergedDir": "/var/lib/docker/overlay2/b499024b2953a6bd7afdfb55a8274190ac7eb31414d270449a766a57a0df148d/merged",
"UpperDir": "/var/lib/docker/overlay2/b499024b2953a6bd7afdfb55a8274190ac7eb31414d270449a766a57a0df148d/diff",
"WorkDir": "/var/lib/docker/overlay2/b499024b2953a6bd7afdfb55a8274190ac7eb31414d270449a766a57a0df148d/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-20220801230127-2732",
"Source": "/var/lib/docker/volumes/functional-20220801230127-2732/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-20220801230127-2732",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-20220801230127-2732",
"name.minikube.sigs.k8s.io": "functional-20220801230127-2732",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "4252b3a5b07819a9d0c52acdcb43dff978bd3ef66119b0d41c6d70bbd67e8b6a",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63691"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63692"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63693"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63694"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63695"
}
]
},
"SandboxKey": "/var/run/docker/netns/4252b3a5b078",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-20220801230127-2732": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"0616180a3fb9",
"functional-20220801230127-2732"
],
"NetworkID": "c68cf8476ce3a6fb0673cb21078590b791f5ec59b8d2a16c502aae80b5d6e74f",
"EndpointID": "f2529b39999ae1f95abc4958150fb2926795c6da925394b216d37855452b5bfb",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220801230127-2732 -n functional-20220801230127-2732
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220801230127-2732 -n functional-20220801230127-2732: (3.5870625s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220801230127-2732 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220801230127-2732 logs -n 25: (6.1820563s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|---------|---------------------|---------------------|
| start | -p | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | |
| | functional-20220801230127-2732 | | | | | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| service | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | service list | | | | | |
| image | functional-20220801230127-2732 image load | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| start | -p | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | |
| | functional-20220801230127-2732 | | | | | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| service | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | |
| | service --namespace=default | | | | | |
| | --https --url hello-node | | | | | |
| start | -p | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | |
| | functional-20220801230127-2732 | | | | | |
| | --dry-run --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| image | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | image ls | | | | | |
| dashboard | --url --port 36195 -p | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | |
| | functional-20220801230127-2732 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| image | functional-20220801230127-2732 image save --daemon | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220801230127-2732 | | | | | |
| cp | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | cp testdata\cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | ssh -n | | | | | |
| | functional-20220801230127-2732 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | functional-20220801230127-2732 cp functional-20220801230127-2732:/home/docker/cp-test.txt | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalparallelCpCmd2281844126\001\cp-test.txt | | | | | |
| ssh | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | ssh -n | | | | | |
| | functional-20220801230127-2732 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| docker-env | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | docker-env | | | | | |
| docker-env | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:07 GMT |
| | docker-env | | | | | |
| update-context | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:07 GMT | 01 Aug 22 23:08 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:08 GMT | 01 Aug 22 23:08 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:08 GMT | 01 Aug 22 23:08 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:08 GMT | 01 Aug 22 23:08 GMT |
| | image ls --format short | | | | | |
| image | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:08 GMT | 01 Aug 22 23:08 GMT |
| | image ls --format yaml | | | | | |
| ssh | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:08 GMT | |
| | ssh pgrep buildkitd | | | | | |
| image | functional-20220801230127-2732 image build -t | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:08 GMT | 01 Aug 22 23:08 GMT |
| | localhost/my-image:functional-20220801230127-2732 | | | | | |
| | testdata\build | | | | | |
| image | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:08 GMT | 01 Aug 22 23:08 GMT |
| | image ls | | | | | |
| image | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:08 GMT | 01 Aug 22 23:08 GMT |
| | image ls --format json | | | | | |
| image | functional-20220801230127-2732 | functional-20220801230127-2732 | minikube8\jenkins | v1.26.0 | 01 Aug 22 23:08 GMT | 01 Aug 22 23:08 GMT |
| | image ls --format table | | | | | |
|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/08/01 23:07:10
Running on machine: minikube8
Binary: Built with gc go1.18.3 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0801 23:07:10.790620 12068 out.go:296] Setting OutFile to fd 736 ...
I0801 23:07:10.861202 12068 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0801 23:07:10.861202 12068 out.go:309] Setting ErrFile to fd 808...
I0801 23:07:10.861202 12068 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0801 23:07:10.886199 12068 out.go:303] Setting JSON to false
I0801 23:07:10.890199 12068 start.go:115] hostinfo: {"hostname":"minikube8","uptime":1787,"bootTime":1659393443,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
W0801 23:07:10.890199 12068 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0801 23:07:10.910255 12068 out.go:177] * [functional-20220801230127-2732] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0801 23:07:10.915231 12068 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
I0801 23:07:10.918212 12068 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
I0801 23:07:10.920200 12068 out.go:177] - MINIKUBE_LOCATION=14695
I0801 23:07:10.923206 12068 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0801 23:07:10.926210 12068 config.go:180] Loaded profile config "functional-20220801230127-2732": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0801 23:07:10.927223 12068 driver.go:365] Setting default libvirt URI to qemu:///system
I0801 23:07:13.418321 12068 docker.go:137] docker version: linux-20.10.17
I0801 23:07:13.425325 12068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0801 23:07:14.000717 12068 info.go:265] docker info: {ID:EHFC:YCAN:KYNN:D6SR:ANQE:WOCA:2TFV:AH4L:QH5G:SCCM:DZFI:3DVV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-08-01 23:07:13.6025982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0801 23:07:14.006715 12068 out.go:177] * Using the docker driver based on existing profile
I0801 23:07:14.008723 12068 start.go:284] selected driver: docker
I0801 23:07:14.008723 12068 start.go:808] validating driver "docker" against &{Name:functional-20220801230127-2732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220801230127-2732 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0801 23:07:14.009675 12068 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0801 23:07:14.022723 12068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0801 23:07:14.583320 12068 info.go:265] docker info: {ID:EHFC:YCAN:KYNN:D6SR:ANQE:WOCA:2TFV:AH4L:QH5G:SCCM:DZFI:3DVV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-08-01 23:07:14.1856709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0801 23:07:14.713184 12068 cni.go:95] Creating CNI manager for ""
I0801 23:07:14.713184 12068 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0801 23:07:14.713184 12068 start_flags.go:310] config:
{Name:functional-20220801230127-2732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220801230127-2732 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:fal
se storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0801 23:07:14.719882 12068 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Mon 2022-08-01 23:02:10 UTC, end at Mon 2022-08-01 23:41:36 UTC. --
Aug 01 23:05:08 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:08.950699600Z" level=info msg="ignoring event" container=929b673911032a5f97a062808b7508a91ee49a92789e060bf3cdbd8665730bc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:05:08 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:08.951184700Z" level=info msg="ignoring event" container=59c7542f8efd8efb825d575fcff28b229650d07d83cb0f24f779f8a0bc9c7f5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:05:08 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:08.951561300Z" level=info msg="ignoring event" container=950d133f572a4063f4332f3b8f97da9178a3d246fc2a40a23feafd5c20cb42ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:05:09 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:09.431924800Z" level=info msg="Removing stale sandbox 18e043b70c15643c964725783cefc675ec954f1653f1160a907633961e4a45da (59c7542f8efd8efb825d575fcff28b229650d07d83cb0f24f779f8a0bc9c7f5e)"
Aug 01 23:05:09 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:09.440299300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 3023bd45c9de9cf7e0c75df1c4b0567f78d466f99264b23c62cdd5474035fecb 9e1b8cee70ca982ba6baa866f16e2dd3f41e7d64fe00f0b3eb58e6ae391db034], retrying...."
Aug 01 23:05:09 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:09.664355100Z" level=info msg="Removing stale sandbox 9c8f89abeb61ebea404e3c0ed38322f6b81d0f7d7fc764b9231f0a0c63ed8c09 (950d133f572a4063f4332f3b8f97da9178a3d246fc2a40a23feafd5c20cb42ab)"
Aug 01 23:05:09 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:09.671931300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 3023bd45c9de9cf7e0c75df1c4b0567f78d466f99264b23c62cdd5474035fecb 6e62d4a6800cd04913e41ee692439d73b5f7f49e400ffaab835c41d74100175b], retrying...."
Aug 01 23:05:09 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:09.693675300Z" level=info msg="Fixing inconsistent endpoint_cnt for network host. Expected=0, Actual=1"
Aug 01 23:05:09 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:09.780932000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 01 23:05:09 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:09.897540000Z" level=info msg="Loading containers: done."
Aug 01 23:05:09 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:09.971394700Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
Aug 01 23:05:09 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:09.971580700Z" level=info msg="Daemon has completed initialization"
Aug 01 23:05:10 functional-20220801230127-2732 systemd[1]: Started Docker Application Container Engine.
Aug 01 23:05:10 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:10.034742300Z" level=info msg="API listen on [::]:2376"
Aug 01 23:05:10 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:10.042701600Z" level=info msg="API listen on /var/run/docker.sock"
Aug 01 23:05:15 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:15.738196600Z" level=info msg="ignoring event" container=f2d3bfd619a46d953f767e1d9866765c70e77c4d33368727955c5b757ba51924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:05:15 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:15.744118900Z" level=info msg="ignoring event" container=5ba5a37d22b878fe5abbfc652f23c6f6d010f8cf52cf4b288862dc4981d3a064 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:05:15 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:15.843270400Z" level=info msg="ignoring event" container=99ad15083b3baef5795f3769fe13fc2b70a97a23fe5bdc981e87e478079a516c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:05:15 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:15.843498300Z" level=info msg="ignoring event" container=dabafa815affa97e853b85b0c2849f62b7796616a1ddc4e01a6facccde7262dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:05:15 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:15.843554500Z" level=info msg="ignoring event" container=6ba044dfebdceaaff47c3b8913a8657b5f26705812d89274f61d3f38b433e7ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:05:15 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:05:15.929886400Z" level=info msg="ignoring event" container=7d262f5d5cee366f0ad1bc3b5b14bbe67434db1435d05699b4a08205cc5528d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:06:50 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:06:50.036207100Z" level=info msg="ignoring event" container=18496a7b523c3d72d8508a78e3c7898ac87fa21a2abeed5d816756c03cf94a19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:06:52 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:06:52.328985500Z" level=info msg="ignoring event" container=efc8f8c61dc30e2470b6184df86337d9452146c4e55f6d6bd1b13d92452638a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:08:28 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:08:28.529114200Z" level=info msg="ignoring event" container=2b4c49fbf4e418903b384886adc5d5ab9c9a18d14efb2632b2d56203b1088093 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 23:08:29 functional-20220801230127-2732 dockerd[8628]: time="2022-08-01T23:08:29.375115700Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
deeebd510143a mysql@sha256:b3a86578a582617214477d91e47e850f9e18df0b5d1644fb2d96d91a340b8972 33 minutes ago Running mysql 0 dd114a991d410
eb9796cfc37b7 nginx@sha256:bd06dfe1f8f7758debd49d3876023992d41842fd8921565aed315a678a309982 34 minutes ago Running myfrontend 0 96d702ac472af
74d95f83d7c60 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 34 minutes ago Running echoserver 0 907ad7d715446
09fcbccdd9650 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 34 minutes ago Running echoserver 0 d1c65be7eb593
063a2965cee11 nginx@sha256:9c2030e1ff2c3fef7440a7fb69475553e548b9685683bdbf669ac0829b889d5f 35 minutes ago Running nginx 0 af0dbfc3d9805
ecfd0fb692389 a4ca41631cc7a 36 minutes ago Running coredns 3 5a6b234068418
57453622dc32f 6e38f40d628db 36 minutes ago Running storage-provisioner 4 69fbfa6d271c3
4f96594e8744d 2ae1ba6417cbc 36 minutes ago Running kube-proxy 3 61e8b0e490140
4e16bf1812d15 d521dd763e2e3 36 minutes ago Running kube-apiserver 0 2a237ea6b5f39
839c2c5364fd4 3a5aa3a515f5d 36 minutes ago Running kube-scheduler 3 c934cbef2be4a
a4d33a5b3f4af 586c112956dfc 36 minutes ago Running kube-controller-manager 3 2d5aa7d4db72d
e10bcdcd1e248 aebe758cef4cd 36 minutes ago Running etcd 3 7583d481c8601
929b673911032 3a5aa3a515f5d 36 minutes ago Exited kube-scheduler 2 950d133f572a4
02fa1005162d4 6e38f40d628db 37 minutes ago Exited storage-provisioner 3 8678b613c7d3d
b88ce2e3db64a a4ca41631cc7a 37 minutes ago Exited coredns 2 e4983882e386d
197cbc6d9d4ef 586c112956dfc 37 minutes ago Exited kube-controller-manager 2 26d5d7d781051
d2185e3d3dd7c aebe758cef4cd 37 minutes ago Exited etcd 2 af8ae86975bc5
823e9f99bb4c0 2ae1ba6417cbc 37 minutes ago Exited kube-proxy 2 4636d388bad58
*
* ==> coredns [b88ce2e3db64] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [ecfd0fb69238] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: functional-20220801230127-2732
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-20220801230127-2732
kubernetes.io/os=linux
minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
minikube.k8s.io/name=functional-20220801230127-2732
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_08_01T23_02_47_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 01 Aug 2022 23:02:43 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-20220801230127-2732
AcquireTime: <unset>
RenewTime: Mon, 01 Aug 2022 23:41:29 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 01 Aug 2022 23:39:40 +0000 Mon, 01 Aug 2022 23:02:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 01 Aug 2022 23:39:40 +0000 Mon, 01 Aug 2022 23:02:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 01 Aug 2022 23:39:40 +0000 Mon, 01 Aug 2022 23:02:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 01 Aug 2022 23:39:40 +0000 Mon, 01 Aug 2022 23:02:58 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-20220801230127-2732
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: 4c192b04687c403f8fbb9bc7975b21b3
System UUID: 4c192b04687c403f8fbb9bc7975b21b3
Boot ID: 83b37af2-3c96-46d9-a03a-eed4700a0e91
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.3
Kube-Proxy Version: v1.24.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-54c4b5c49f-jwsmk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default hello-node-connect-578cdc45cb-cx7zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default mysql-67f7d69d8b-zwxxj 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 34m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
kube-system coredns-6d4b75cb6d-lxpgw 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-20220801230127-2732 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-20220801230127-2732 250m (1%) 0 (0%) 0 (0%) 0 (0%) 36m
kube-system kube-controller-manager-functional-20220801230127-2732 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-6w9xw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-20220801230127-2732 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 36m kube-proxy
Normal Starting 37m kube-proxy
Normal Starting 38m kube-proxy
Normal NodeHasSufficientMemory 39m (x5 over 39m) kubelet Node functional-20220801230127-2732 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x5 over 39m) kubelet Node functional-20220801230127-2732 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 39m (x5 over 39m) kubelet Node functional-20220801230127-2732 status is now: NodeHasSufficientPID
Normal Starting 38m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 38m kubelet Node functional-20220801230127-2732 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-20220801230127-2732 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-20220801230127-2732 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 38m kubelet Node functional-20220801230127-2732 status is now: NodeReady
Normal RegisteredNode 38m node-controller Node functional-20220801230127-2732 event: Registered Node functional-20220801230127-2732 in Controller
Normal RegisteredNode 37m node-controller Node functional-20220801230127-2732 event: Registered Node functional-20220801230127-2732 in Controller
Normal NodeAllocatableEnforced 36m kubelet Updated Node Allocatable limit across pods
Normal Starting 36m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 36m (x8 over 36m) kubelet Node functional-20220801230127-2732 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 36m (x8 over 36m) kubelet Node functional-20220801230127-2732 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 36m (x7 over 36m) kubelet Node functional-20220801230127-2732 status is now: NodeHasSufficientPID
Normal RegisteredNode 35m node-controller Node functional-20220801230127-2732 event: Registered Node functional-20220801230127-2732 in Controller
*
* ==> dmesg <==
* [Aug 1 23:16] WSL2: Performing memory compaction.
[Aug 1 23:17] WSL2: Performing memory compaction.
[Aug 1 23:18] WSL2: Performing memory compaction.
[Aug 1 23:19] WSL2: Performing memory compaction.
[Aug 1 23:20] WSL2: Performing memory compaction.
[Aug 1 23:21] WSL2: Performing memory compaction.
[Aug 1 23:22] WSL2: Performing memory compaction.
[Aug 1 23:23] WSL2: Performing memory compaction.
[Aug 1 23:24] WSL2: Performing memory compaction.
[Aug 1 23:25] WSL2: Performing memory compaction.
[Aug 1 23:26] WSL2: Performing memory compaction.
[Aug 1 23:27] WSL2: Performing memory compaction.
[Aug 1 23:28] WSL2: Performing memory compaction.
[Aug 1 23:29] WSL2: Performing memory compaction.
[Aug 1 23:30] WSL2: Performing memory compaction.
[Aug 1 23:31] WSL2: Performing memory compaction.
[Aug 1 23:32] WSL2: Performing memory compaction.
[Aug 1 23:33] WSL2: Performing memory compaction.
[Aug 1 23:34] WSL2: Performing memory compaction.
[Aug 1 23:35] WSL2: Performing memory compaction.
[Aug 1 23:37] WSL2: Performing memory compaction.
[Aug 1 23:38] WSL2: Performing memory compaction.
[Aug 1 23:39] WSL2: Performing memory compaction.
[Aug 1 23:40] WSL2: Performing memory compaction.
[Aug 1 23:41] WSL2: Performing memory compaction.
*
* ==> etcd [d2185e3d3dd7] <==
* {"level":"info","ts":"2022-08-01T23:03:55.929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
{"level":"info","ts":"2022-08-01T23:03:55.929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-08-01T23:03:55.930Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220801230127-2732 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-08-01T23:03:55.930Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-08-01T23:03:55.931Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-08-01T23:03:55.934Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-08-01T23:03:55.934Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-08-01T23:03:55.937Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-08-01T23:03:55.937Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"warn","ts":"2022-08-01T23:04:02.525Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"187.6147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:120"}
{"level":"info","ts":"2022-08-01T23:04:02.525Z","caller":"traceutil/trace.go:171","msg":"trace[218533087] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:422; }","duration":"188.0336ms","start":"2022-08-01T23:04:02.337Z","end":"2022-08-01T23:04:02.525Z","steps":["trace[218533087] 'agreement among raft nodes before linearized reading' (duration: 91.7926ms)","trace[218533087] 'range keys from in-memory index tree' (duration: 95.7863ms)"],"step_count":2}
{"level":"warn","ts":"2022-08-01T23:04:11.436Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.4228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-6w9xw\" ","response":"range_response_count:1 size:4557"}
{"level":"info","ts":"2022-08-01T23:04:11.436Z","caller":"traceutil/trace.go:171","msg":"trace[1678978260] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-6w9xw; range_end:; response_count:1; response_revision:495; }","duration":"100.8314ms","start":"2022-08-01T23:04:11.336Z","end":"2022-08-01T23:04:11.436Z","steps":["trace[1678978260] 'range keys from in-memory index tree' (duration: 100.0548ms)"],"step_count":1}
{"level":"warn","ts":"2022-08-01T23:04:11.436Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.8119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/etcd-functional-20220801230127-2732.17075af79cec16cc\" ","response":"range_response_count:1 size:733"}
{"level":"warn","ts":"2022-08-01T23:04:11.436Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.9703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-08-01T23:04:11.437Z","caller":"traceutil/trace.go:171","msg":"trace[948299562] range","detail":"{range_begin:/registry/events/kube-system/etcd-functional-20220801230127-2732.17075af79cec16cc; range_end:; response_count:1; response_revision:495; }","duration":"101.4424ms","start":"2022-08-01T23:04:11.335Z","end":"2022-08-01T23:04:11.437Z","steps":["trace[948299562] 'range keys from in-memory index tree' (duration: 100.6685ms)"],"step_count":1}
{"level":"info","ts":"2022-08-01T23:04:11.437Z","caller":"traceutil/trace.go:171","msg":"trace[2019164215] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:495; }","duration":"109.6954ms","start":"2022-08-01T23:04:11.327Z","end":"2022-08-01T23:04:11.437Z","steps":["trace[2019164215] 'range keys from in-memory index tree' (duration: 108.5528ms)"],"step_count":1}
{"level":"info","ts":"2022-08-01T23:05:03.034Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-08-01T23:05:03.034Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-20220801230127-2732","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/08/01 23:05:03 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/08/01 23:05:03 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-08-01T23:05:03.134Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-08-01T23:05:03.326Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-08-01T23:05:03.327Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-08-01T23:05:03.327Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-20220801230127-2732","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> etcd [e10bcdcd1e24] <==
* {"level":"warn","ts":"2022-08-01T23:08:09.937Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.6981327s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1132"}
{"level":"info","ts":"2022-08-01T23:08:09.939Z","caller":"traceutil/trace.go:171","msg":"trace[945031520] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:864; }","duration":"1.6996127s","start":"2022-08-01T23:08:08.239Z","end":"2022-08-01T23:08:09.939Z","steps":["trace[945031520] 'range keys from in-memory index tree' (duration: 1.697916s)"],"step_count":1}
{"level":"warn","ts":"2022-08-01T23:08:09.937Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.4916434s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13595"}
{"level":"info","ts":"2022-08-01T23:08:09.939Z","caller":"traceutil/trace.go:171","msg":"trace[237188742] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:864; }","duration":"1.4933903s","start":"2022-08-01T23:08:08.446Z","end":"2022-08-01T23:08:09.939Z","steps":["trace[237188742] 'range keys from in-memory index tree' (duration: 1.491344s)"],"step_count":1}
{"level":"warn","ts":"2022-08-01T23:08:09.939Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-08-01T23:08:08.446Z","time spent":"1.4934591s","remote":"127.0.0.1:35694","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13619,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2022-08-01T23:08:09.939Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-08-01T23:08:08.239Z","time spent":"1.7000906s","remote":"127.0.0.1:35688","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1156,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
{"level":"info","ts":"2022-08-01T23:08:10.064Z","caller":"traceutil/trace.go:171","msg":"trace[706867528] linearizableReadLoop","detail":"{readStateIndex:951; appliedIndex:951; }","duration":"105.8641ms","start":"2022-08-01T23:08:09.958Z","end":"2022-08-01T23:08:10.064Z","steps":["trace[706867528] 'read index received' (duration: 105.8538ms)","trace[706867528] 'applied index is now lower than readState.Index' (duration: 6.9µs)"],"step_count":2}
{"level":"warn","ts":"2022-08-01T23:08:10.082Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"123.5924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-08-01T23:08:10.082Z","caller":"traceutil/trace.go:171","msg":"trace[947981202] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:865; }","duration":"123.7431ms","start":"2022-08-01T23:08:09.958Z","end":"2022-08-01T23:08:10.082Z","steps":["trace[947981202] 'agreement among raft nodes before linearized reading' (duration: 106.0266ms)","trace[947981202] 'range keys from in-memory index tree' (duration: 17.5436ms)"],"step_count":2}
{"level":"info","ts":"2022-08-01T23:15:25.144Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":968}
{"level":"info","ts":"2022-08-01T23:15:25.146Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":968,"took":"1.5152ms"}
{"level":"warn","ts":"2022-08-01T23:17:42.436Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.083ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128014757671386791 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:1266 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128014757671386789 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>","response":"size:16"}
{"level":"info","ts":"2022-08-01T23:17:42.436Z","caller":"traceutil/trace.go:171","msg":"trace[1492947220] transaction","detail":"{read_only:false; response_revision:1273; number_of_response:1; }","duration":"107.1443ms","start":"2022-08-01T23:17:42.329Z","end":"2022-08-01T23:17:42.436Z","steps":["trace[1492947220] 'compare' (duration: 105.8742ms)"],"step_count":1}
{"level":"info","ts":"2022-08-01T23:20:25.165Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1178}
{"level":"info","ts":"2022-08-01T23:20:25.166Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1178,"took":"610.1µs"}
{"level":"info","ts":"2022-08-01T23:25:25.183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1387}
{"level":"info","ts":"2022-08-01T23:25:25.184Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1387,"took":"673µs"}
{"level":"info","ts":"2022-08-01T23:30:25.204Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1597}
{"level":"info","ts":"2022-08-01T23:30:25.205Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1597,"took":"869.2µs"}
{"level":"warn","ts":"2022-08-01T23:32:30.036Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.3033ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:353"}
{"level":"info","ts":"2022-08-01T23:32:30.036Z","caller":"traceutil/trace.go:171","msg":"trace[8672625] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:1893; }","duration":"100.4845ms","start":"2022-08-01T23:32:29.936Z","end":"2022-08-01T23:32:30.036Z","steps":["trace[8672625] 'range keys from in-memory index tree' (duration: 100.0139ms)"],"step_count":1}
{"level":"info","ts":"2022-08-01T23:35:25.214Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1807}
{"level":"info","ts":"2022-08-01T23:35:25.215Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1807,"took":"590.6µs"}
{"level":"info","ts":"2022-08-01T23:40:25.233Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2016}
{"level":"info","ts":"2022-08-01T23:40:25.234Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2016,"took":"671.2µs"}
*
* ==> kernel <==
* 23:41:36 up 57 min, 0 users, load average: 0.15, 0.31, 0.54
Linux functional-20220801230127-2732 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [4e16bf1812d1] <==
* I0801 23:06:37.641139 1 trace.go:205] Trace[1912207879]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:9b2e4827-613e-4dae-b94e-619c357a01b8,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Aug-2022 23:06:36.549) (total time: 1091ms):
Trace[1912207879]: ---"Listing from storage done" 1090ms (23:06:37.640)
Trace[1912207879]: [1.0910689s] [1.0910689s] END
I0801 23:07:18.217982 1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.110.152.83]
I0801 23:07:53.448406 1 trace.go:205] Trace[1294519492]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (01-Aug-2022 23:07:52.149) (total time: 1298ms):
Trace[1294519492]: ---"Transaction committed" 1217ms (23:07:53.448)
Trace[1294519492]: [1.2982445s] [1.2982445s] END
I0801 23:07:53.456118 1 trace.go:205] Trace[1565011814]: "List(recursive=true) etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (01-Aug-2022 23:07:52.442) (total time: 1013ms):
Trace[1565011814]: [1.0134906s] [1.0134906s] END
I0801 23:07:53.456901 1 trace.go:205] Trace[657691485]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:058100f1-406e-473e-9624-5586baf92b4a,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Aug-2022 23:07:52.442) (total time: 1014ms):
Trace[657691485]: ---"Listing from storage done" 1013ms (23:07:53.456)
Trace[657691485]: [1.014315s] [1.014315s] END
I0801 23:07:53.529138 1 trace.go:205] Trace[794587613]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:3e159add-898d-433c-9e27-0a54facb18d2,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (01-Aug-2022 23:07:52.544) (total time: 984ms):
Trace[794587613]: ---"About to write a response" 984ms (23:07:53.528)
Trace[794587613]: [984.6812ms] [984.6812ms] END
I0801 23:08:09.941560 1 trace.go:205] Trace[290458040]: "List(recursive=true) etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (01-Aug-2022 23:08:08.444) (total time: 1497ms):
Trace[290458040]: [1.4973102s] [1.4973102s] END
I0801 23:08:09.941831 1 trace.go:205] Trace[379637793]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:d9efc82e-4319-4c35-b1d7-d335cb68a23b,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (01-Aug-2022 23:08:08.238) (total time: 1703ms):
Trace[379637793]: ---"About to write a response" 1703ms (23:08:09.941)
Trace[379637793]: [1.7033552s] [1.7033552s] END
I0801 23:08:09.942194 1 trace.go:205] Trace[1614929798]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:b407f42c-9c17-474d-9115-763b7402997a,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Aug-2022 23:08:08.444) (total time: 1498ms):
Trace[1614929798]: ---"Listing from storage done" 1497ms (23:08:09.941)
Trace[1614929798]: [1.4980544s] [1.4980544s] END
W0801 23:18:53.304070 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0801 23:27:06.288422 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
*
* ==> kube-controller-manager [197cbc6d9d4e] <==
* I0801 23:04:15.328697 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0801 23:04:15.329746 1 shared_informer.go:262] Caches are synced for attach detach
I0801 23:04:15.328706 1 shared_informer.go:262] Caches are synced for namespace
I0801 23:04:15.328713 1 shared_informer.go:262] Caches are synced for endpoint
I0801 23:04:15.328720 1 shared_informer.go:262] Caches are synced for node
I0801 23:04:15.328742 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0801 23:04:15.330272 1 range_allocator.go:173] Starting range CIDR allocator
I0801 23:04:15.328768 1 shared_informer.go:262] Caches are synced for daemon sets
I0801 23:04:15.330287 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0801 23:04:15.330308 1 shared_informer.go:262] Caches are synced for cidrallocator
I0801 23:04:15.329194 1 event.go:294] "Event occurred" object="functional-20220801230127-2732" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220801230127-2732 event: Registered Node functional-20220801230127-2732 in Controller"
I0801 23:04:15.329205 1 shared_informer.go:262] Caches are synced for job
I0801 23:04:15.329391 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0801 23:04:15.333684 1 shared_informer.go:262] Caches are synced for GC
I0801 23:04:15.335733 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0801 23:04:15.426945 1 shared_informer.go:262] Caches are synced for HPA
I0801 23:04:15.431610 1 shared_informer.go:262] Caches are synced for resource quota
I0801 23:04:15.444057 1 shared_informer.go:262] Caches are synced for disruption
I0801 23:04:15.444183 1 disruption.go:371] Sending events to api server.
I0801 23:04:15.454803 1 shared_informer.go:262] Caches are synced for resource quota
I0801 23:04:15.456435 1 shared_informer.go:262] Caches are synced for deployment
I0801 23:04:15.525426 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0801 23:04:15.938684 1 shared_informer.go:262] Caches are synced for garbage collector
I0801 23:04:15.938962 1 shared_informer.go:262] Caches are synced for garbage collector
I0801 23:04:15.938983 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [a4d33a5b3f4a] <==
* I0801 23:05:45.126717 1 shared_informer.go:262] Caches are synced for daemon sets
I0801 23:05:45.126749 1 shared_informer.go:262] Caches are synced for attach detach
I0801 23:05:45.126880 1 shared_informer.go:262] Caches are synced for PV protection
I0801 23:05:45.133005 1 shared_informer.go:262] Caches are synced for HPA
I0801 23:05:45.133142 1 shared_informer.go:262] Caches are synced for cronjob
I0801 23:05:45.145566 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0801 23:05:45.229109 1 shared_informer.go:262] Caches are synced for crt configmap
I0801 23:05:45.230593 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0801 23:05:45.231836 1 shared_informer.go:262] Caches are synced for deployment
I0801 23:05:45.237224 1 shared_informer.go:262] Caches are synced for resource quota
I0801 23:05:45.237347 1 shared_informer.go:262] Caches are synced for stateful set
I0801 23:05:45.237970 1 shared_informer.go:262] Caches are synced for resource quota
I0801 23:05:45.253376 1 shared_informer.go:262] Caches are synced for disruption
I0801 23:05:45.253621 1 disruption.go:371] Sending events to api server.
I0801 23:05:45.643506 1 shared_informer.go:262] Caches are synced for garbage collector
I0801 23:05:45.643678 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0801 23:05:45.645851 1 shared_informer.go:262] Caches are synced for garbage collector
I0801 23:06:18.738343 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0801 23:06:18.738410 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0801 23:06:21.638350 1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-578cdc45cb to 1"
I0801 23:06:21.926737 1 event.go:294] "Event occurred" object="default/hello-node-connect-578cdc45cb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-578cdc45cb-cx7zl"
I0801 23:06:23.732107 1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54c4b5c49f to 1"
I0801 23:06:23.739309 1 event.go:294] "Event occurred" object="default/hello-node-54c4b5c49f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54c4b5c49f-jwsmk"
I0801 23:07:18.348052 1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-67f7d69d8b to 1"
I0801 23:07:18.434654 1 event.go:294] "Event occurred" object="default/mysql-67f7d69d8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-67f7d69d8b-zwxxj"
*
* ==> kube-proxy [4f96594e8744] <==
* I0801 23:05:34.442254 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0801 23:05:34.526268 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0801 23:05:34.529615 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0801 23:05:34.532554 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0801 23:05:34.535695 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I0801 23:05:34.552132 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0801 23:05:34.552277 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0801 23:05:34.552306 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0801 23:05:34.837824 1 server_others.go:206] "Using iptables Proxier"
I0801 23:05:34.837977 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0801 23:05:34.838004 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0801 23:05:34.838033 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0801 23:05:34.838096 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0801 23:05:34.838504 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0801 23:05:34.839098 1 server.go:661] "Version info" version="v1.24.3"
I0801 23:05:34.839116 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0801 23:05:34.840629 1 config.go:317] "Starting service config controller"
I0801 23:05:34.840692 1 shared_informer.go:255] Waiting for caches to sync for service config
I0801 23:05:34.840739 1 config.go:226] "Starting endpoint slice config controller"
I0801 23:05:34.840753 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0801 23:05:34.840750 1 config.go:444] "Starting node config controller"
I0801 23:05:34.840770 1 shared_informer.go:255] Waiting for caches to sync for node config
I0801 23:05:34.941234 1 shared_informer.go:262] Caches are synced for node config
I0801 23:05:34.941378 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0801 23:05:34.941463 1 shared_informer.go:262] Caches are synced for service config
*
* ==> kube-proxy [823e9f99bb4c] <==
* I0801 23:03:52.332464 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0801 23:03:52.335226 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E0801 23:03:52.339379 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220801230127-2732": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:03:53.529005 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220801230127-2732": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:04:02.330672 1 node.go:152] Failed to retrieve node info: nodes "functional-20220801230127-2732" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I0801 23:04:06.525533 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0801 23:04:06.525661 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0801 23:04:06.525766 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0801 23:04:06.691021 1 server_others.go:206] "Using iptables Proxier"
I0801 23:04:06.691159 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0801 23:04:06.691176 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0801 23:04:06.691191 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0801 23:04:06.691223 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0801 23:04:06.691777 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0801 23:04:06.692134 1 server.go:661] "Version info" version="v1.24.3"
I0801 23:04:06.692148 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0801 23:04:06.692829 1 config.go:226] "Starting endpoint slice config controller"
I0801 23:04:06.726070 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0801 23:04:06.726160 1 config.go:444] "Starting node config controller"
I0801 23:04:06.726183 1 shared_informer.go:255] Waiting for caches to sync for node config
I0801 23:04:06.727889 1 config.go:317] "Starting service config controller"
I0801 23:04:06.728135 1 shared_informer.go:255] Waiting for caches to sync for service config
I0801 23:04:06.827611 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0801 23:04:06.827768 1 shared_informer.go:262] Caches are synced for node config
I0801 23:04:06.829379 1 shared_informer.go:262] Caches are synced for service config
*
* ==> kube-scheduler [839c2c5364fd] <==
* I0801 23:05:24.435640 1 serving.go:348] Generated self-signed cert in-memory
I0801 23:05:29.427162 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
I0801 23:05:29.427225 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0801 23:05:29.440229 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0801 23:05:29.440472 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0801 23:05:29.440589 1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0801 23:05:29.440558 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0801 23:05:29.440768 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0801 23:05:29.440869 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0801 23:05:29.441108 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0801 23:05:29.440367 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0801 23:05:29.541548 1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0801 23:05:29.541760 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0801 23:05:29.541999 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [929b67391103] <==
* W0801 23:05:08.497457 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.497605 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0801 23:05:08.617390 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.617600 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0801 23:05:08.624335 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.624457 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0801 23:05:08.632125 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.632245 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0801 23:05:08.670027 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.670163 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0801 23:05:08.722293 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.722423 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0801 23:05:08.734205 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.734332 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0801 23:05:08.762536 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.762731 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0801 23:05:08.776963 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.777133 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0801 23:05:08.873167 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0801 23:05:08.873368 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
I0801 23:05:08.877587 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0801 23:05:08.877620 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0801 23:05:08.877744 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0801 23:05:08.878268 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
E0801 23:05:08.878881 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-08-01 23:02:10 UTC, end at Mon 2022-08-01 23:41:37 UTC. --
Aug 01 23:06:25 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:25.034218 9882 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d1c65be7eb5939869454db720db489baf6484853525ea214eae488d9fd3f97d4"
Aug 01 23:06:27 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:27.245445 9882 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="907ad7d7154467e4520ba8a80dd4ad2082242641fe391ee596604b5fbd3ecadf"
Aug 01 23:06:53 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:53.432814 9882 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/a86ddac3-9045-48e4-a7ef-69b87aad1d1c-pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc\") pod \"a86ddac3-9045-48e4-a7ef-69b87aad1d1c\" (UID: \"a86ddac3-9045-48e4-a7ef-69b87aad1d1c\") "
Aug 01 23:06:53 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:53.432979 9882 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a86ddac3-9045-48e4-a7ef-69b87aad1d1c-pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc" (OuterVolumeSpecName: "mypd") pod "a86ddac3-9045-48e4-a7ef-69b87aad1d1c" (UID: "a86ddac3-9045-48e4-a7ef-69b87aad1d1c"). InnerVolumeSpecName "pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 01 23:06:53 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:53.433029 9882 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hszcc\" (UniqueName: \"kubernetes.io/projected/a86ddac3-9045-48e4-a7ef-69b87aad1d1c-kube-api-access-hszcc\") pod \"a86ddac3-9045-48e4-a7ef-69b87aad1d1c\" (UID: \"a86ddac3-9045-48e4-a7ef-69b87aad1d1c\") "
Aug 01 23:06:53 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:53.433237 9882 reconciler.go:312] "Volume detached for volume \"pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc\" (UniqueName: \"kubernetes.io/host-path/a86ddac3-9045-48e4-a7ef-69b87aad1d1c-pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc\") on node \"functional-20220801230127-2732\" DevicePath \"\""
Aug 01 23:06:53 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:53.437879 9882 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86ddac3-9045-48e4-a7ef-69b87aad1d1c-kube-api-access-hszcc" (OuterVolumeSpecName: "kube-api-access-hszcc") pod "a86ddac3-9045-48e4-a7ef-69b87aad1d1c" (UID: "a86ddac3-9045-48e4-a7ef-69b87aad1d1c"). InnerVolumeSpecName "kube-api-access-hszcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 01 23:06:53 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:53.534091 9882 reconciler.go:312] "Volume detached for volume \"kube-api-access-hszcc\" (UniqueName: \"kubernetes.io/projected/a86ddac3-9045-48e4-a7ef-69b87aad1d1c-kube-api-access-hszcc\") on node \"functional-20220801230127-2732\" DevicePath \"\""
Aug 01 23:06:54 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:54.358759 9882 scope.go:110] "RemoveContainer" containerID="18496a7b523c3d72d8508a78e3c7898ac87fa21a2abeed5d816756c03cf94a19"
Aug 01 23:06:55 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:55.140528 9882 topology_manager.go:200] "Topology Admit Handler"
Aug 01 23:06:55 functional-20220801230127-2732 kubelet[9882]: E0801 23:06:55.140911 9882 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="a86ddac3-9045-48e4-a7ef-69b87aad1d1c" containerName="myfrontend"
Aug 01 23:06:55 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:55.141018 9882 memory_manager.go:345] "RemoveStaleState removing state" podUID="a86ddac3-9045-48e4-a7ef-69b87aad1d1c" containerName="myfrontend"
Aug 01 23:06:55 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:55.347928 9882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmt44\" (UniqueName: \"kubernetes.io/projected/1a4584f1-bcf5-4729-9f1d-295912aed002-kube-api-access-qmt44\") pod \"sp-pod\" (UID: \"1a4584f1-bcf5-4729-9f1d-295912aed002\") " pod="default/sp-pod"
Aug 01 23:06:55 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:55.348168 9882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc\" (UniqueName: \"kubernetes.io/host-path/1a4584f1-bcf5-4729-9f1d-295912aed002-pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc\") pod \"sp-pod\" (UID: \"1a4584f1-bcf5-4729-9f1d-295912aed002\") " pod="default/sp-pod"
Aug 01 23:06:55 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:55.748750 9882 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a86ddac3-9045-48e4-a7ef-69b87aad1d1c path="/var/lib/kubelet/pods/a86ddac3-9045-48e4-a7ef-69b87aad1d1c/volumes"
Aug 01 23:06:57 functional-20220801230127-2732 kubelet[9882]: I0801 23:06:57.328759 9882 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="96d702ac472af38e0a9dcd53fe12acf8b08e02dfb592b37f5c6d2da9b395c88c"
Aug 01 23:07:18 functional-20220801230127-2732 kubelet[9882]: I0801 23:07:18.460780 9882 topology_manager.go:200] "Topology Admit Handler"
Aug 01 23:07:18 functional-20220801230127-2732 kubelet[9882]: I0801 23:07:18.635114 9882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppj77\" (UniqueName: \"kubernetes.io/projected/113c5b9b-ce3b-4140-80db-c25cb2a594c9-kube-api-access-ppj77\") pod \"mysql-67f7d69d8b-zwxxj\" (UID: \"113c5b9b-ce3b-4140-80db-c25cb2a594c9\") " pod="default/mysql-67f7d69d8b-zwxxj"
Aug 01 23:10:19 functional-20220801230127-2732 kubelet[9882]: W0801 23:10:19.833187 9882 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 01 23:15:19 functional-20220801230127-2732 kubelet[9882]: W0801 23:15:19.833162 9882 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 01 23:20:19 functional-20220801230127-2732 kubelet[9882]: W0801 23:20:19.833660 9882 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 01 23:25:19 functional-20220801230127-2732 kubelet[9882]: W0801 23:25:19.834535 9882 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 01 23:30:19 functional-20220801230127-2732 kubelet[9882]: W0801 23:30:19.841626 9882 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 01 23:35:19 functional-20220801230127-2732 kubelet[9882]: W0801 23:35:19.837477 9882 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 01 23:40:19 functional-20220801230127-2732 kubelet[9882]: W0801 23:40:19.837784 9882 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [02fa1005162d] <==
* I0801 23:04:10.431473 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0801 23:04:10.461511 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0801 23:04:10.461633 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0801 23:04:27.967256 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0801 23:04:27.967590 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220801230127-2732_9e23c108-4bb0-4317-a03b-b78abf5cd108!
I0801 23:04:27.967610 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2909778d-a1ea-4813-abe0-a8c3291cf1c7", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220801230127-2732_9e23c108-4bb0-4317-a03b-b78abf5cd108 became leader
I0801 23:04:28.067938 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220801230127-2732_9e23c108-4bb0-4317-a03b-b78abf5cd108!
*
* ==> storage-provisioner [57453622dc32] <==
* I0801 23:05:34.539157 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0801 23:05:34.551591 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0801 23:05:34.551705 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0801 23:05:52.225270 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0801 23:05:52.225708 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220801230127-2732_92c1453d-b96b-4a9a-a220-16d056aeb040!
I0801 23:05:52.225747 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2909778d-a1ea-4813-abe0-a8c3291cf1c7", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220801230127-2732_92c1453d-b96b-4a9a-a220-16d056aeb040 became leader
I0801 23:05:52.327133 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220801230127-2732_92c1453d-b96b-4a9a-a220-16d056aeb040!
I0801 23:06:18.738034 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0801 23:06:18.738415 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard ce29a959-9aa9-4b7e-be41-7ee99a6bfde9 381 0 2022-08-01 23:03:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-08-01 23:03:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc &PersistentVolumeClaim{ObjectMeta:{myclaim default 3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc 683 0 2022-08-01 23:06:18 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-08-01 23:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-08-01 23:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0801 23:06:18.739212 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0801 23:06:18.739543 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc" provisioned
I0801 23:06:18.739574 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0801 23:06:18.739586 1 volume_store.go:212] Trying to save persistentvolume "pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc"
I0801 23:06:18.757679 1 volume_store.go:219] persistentvolume "pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc" saved
I0801 23:06:18.758370 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3df1bf2c-7d4c-43a7-bc80-14bf73e52fcc
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220801230127-2732 -n functional-20220801230127-2732
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220801230127-2732 -n functional-20220801230127-2732: (3.4809236s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-20220801230127-2732 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-20220801230127-2732 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220801230127-2732 describe pod : exit status 1 (184.1687ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-20220801230127-2732 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2118.90s)