=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run: kubectl --context functional-20220602172845-12108 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run: kubectl --context functional-20220602172845-12108 expose deployment hello-node --type=NodePort --port=8080
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1438: (dbg) Done: kubectl --context functional-20220602172845-12108 expose deployment hello-node --type=NodePort --port=8080: (1.6921578s)
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-l5tpx" [9b6b8a79-b2d8-4a4f-b2d8-cad582357bb9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-l5tpx" [9b6b8a79-b2d8-4a4f-b2d8-cad582357bb9] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 30.1971078s
functional_test.go:1448: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service list: (7.0574951s)
functional_test.go:1462: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service --namespace=default --https --url hello-node
functional_test.go:1391: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service --namespace=default --https --url hello-node: exit status 1 (33m30.8493724s)
-- stdout --
https://127.0.0.1:51437
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1464: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run: kubectl --context functional-20220602172845-12108 describe po hello-node
functional_test.go:1409: hello-node pod describe:
Name: hello-node-54fbb85-l5tpx
Namespace: default
Priority: 0
Node: functional-20220602172845-12108/192.168.49.2
Start Time: Thu, 02 Jun 2022 17:34:35 +0000
Labels: app=hello-node
pod-template-hash=54fbb85
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/hello-node-54fbb85
Containers:
echoserver:
Container ID: docker://149fe597618122dc6fb4eb7fb2f007100a5f6db1bb8b5ca9b5a2e43bb9452bfb
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Jun 2022 17:35:00 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g8447 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-g8447:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-54fbb85-l5tpx to functional-20220602172845-12108
Normal Pulling 34m kubelet, functional-20220602172845-12108 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 33m kubelet, functional-20220602172845-12108 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 18.4299733s
Normal Created 33m kubelet, functional-20220602172845-12108 Created container echoserver
Normal Started 33m kubelet, functional-20220602172845-12108 Started container echoserver
Name: hello-node-connect-74cf8bc446-qjhfg
Namespace: default
Priority: 0
Node: functional-20220602172845-12108/192.168.49.2
Start Time: Thu, 02 Jun 2022 17:34:35 +0000
Labels: app=hello-node-connect
pod-template-hash=74cf8bc446
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/hello-node-connect-74cf8bc446
Containers:
echoserver:
Container ID: docker://2c4191d862c838b4c8915a49753b8eec5c08916808271c4bc7711a2bc88598f9
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 02 Jun 2022 17:35:00 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kn7cx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-kn7cx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-74cf8bc446-qjhfg to functional-20220602172845-12108
Normal Pulling 34m kubelet, functional-20220602172845-12108 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 33m kubelet, functional-20220602172845-12108 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 18.8132659s
Normal Created 33m kubelet, functional-20220602172845-12108 Created container echoserver
Normal Started 33m kubelet, functional-20220602172845-12108 Started container echoserver
functional_test.go:1411: (dbg) Run: kubectl --context functional-20220602172845-12108 logs -l app=hello-node
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run: kubectl --context functional-20220602172845-12108 describe svc hello-node
functional_test.go:1421: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.99.217.46
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31650/TCP
Endpoints: 172.17.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-20220602172845-12108
helpers_test.go:231: (dbg) Done: docker inspect functional-20220602172845-12108: (1.0647881s)
helpers_test.go:235: (dbg) docker inspect functional-20220602172845-12108:
-- stdout --
[
{
"Id": "297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5",
"Created": "2022-06-02T17:29:37.4256166Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 21052,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-06-02T17:29:38.4958531Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
"ResolvConfPath": "/var/lib/docker/containers/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5/hostname",
"HostsPath": "/var/lib/docker/containers/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5/hosts",
"LogPath": "/var/lib/docker/containers/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5-json.log",
"Name": "/functional-20220602172845-12108",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-20220602172845-12108:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-20220602172845-12108",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/7b1566f809bb5cea6f1e60492a928281ca264fcb5dd82ca9a266c98413582f4e-init/diff:/var/lib/docker/overlay2/dfce970b43800856c522d9750e5e1364e8adf4be4cf71ca7c53d79b33355f5a7/diff:/var/lib/docker/overlay2/4fd23a1b84854239f1bb855d05e42ecd6acbd1b0944b347813a56f5f45356a42/diff:/var/lib/docker/overlay2/864c5b1fbc297750771bb843fdeb4bafa10868a71716f4a01f1119609fb34667/diff:/var/lib/docker/overlay2/0f11f6855118857c743b90ca120ff7aa550f8157d475abf59df950433a5bc6e8/diff:/var/lib/docker/overlay2/2ae7f559725a060dc3b3a9c2fbd554b98114ae47dbf8db75f13bd8a95cbae19a/diff:/var/lib/docker/overlay2/48f41ac288d1037223ac101e6bc07f05729cdcecd98cc85971db99e90765c437/diff:/var/lib/docker/overlay2/8d4eaae639ade3ad3459b4fb67dbcac83774b72a2550b0a4bca1f21d122b20e6/diff:/var/lib/docker/overlay2/e06515bb91756221300de52336376d32ef9bd8685a92352e522936c4947b88ee/diff:/var/lib/docker/overlay2/a2f615fb794b704dc3823080c47e2c357cf4826ec91f6ae190c7497bb18a80cd/diff:/var/lib/docker/overlay2/22f99f8a3da21c6e2be4c5c5e9d969af73e7695aaf9b0c7d0d09b5795ba76416/diff:/var/lib/docker/overlay2/9c0266785c64b9f6c471863067ca9db045a5aa61167a7817217cf01825a7d868/diff:/var/lib/docker/overlay2/b8a0250c9ae7d899ee3e46414c2db7f7ba363793900f8fcbf1b470586ebe7bd9/diff:/var/lib/docker/overlay2/00afbeac619cb9c06d4da311f5fc5aa3f5147b88b291acf06d4c4b36984ad5a2/diff:/var/lib/docker/overlay2/da51241ed08bd861b9d27902198eae13c3e4aac5c79f522e9f3fa209ea35e8d3/diff:/var/lib/docker/overlay2/b01176f7dbe98e3004db7c0fe45d94616a803dd8ae9cbdf3a1f2a188604178af/diff:/var/lib/docker/overlay2/0ebb0ff0177c8116e72a14ac704b161f75922cea05fe804ad1f7b83f4cd3dd70/diff:/var/lib/docker/overlay2/bae8d175bc3e334a70aaa239643efa0e8b453ab163f077d9cef60e3840c717ba/diff:/var/lib/docker/overlay2/e72a79f763a44dc32f9a2e84dc5e28a060e7fbb9f4624cb8aaa084dd356522ec/diff:/var/lib/docker/overlay2/2e1bc304b205033ad7f49fb8db243b0991596e0eec913fd13e8382aa25767e21/diff:/var/lib/docker/overlay2/ebb9b39dedfc09f9f34ea879f56a8ffd24ab9f9bf8acc93aa9df5eb93dba58e8/diff:/var/lib/docker/overlay2/bffdca36eba4bce9086f2c269bcfe5b915d807483717f0e27acbd51b5bbfc11b/diff:/var/lib/docker/overlay2/96c321cbf06c0050c8a0a7897e9533db1ee5788eb09b1e1d605bdd1134af8eca/diff:/var/lib/docker/overlay2/735422b44af98e330209fe1c4273bf57aa33fcfd770f3e9d6f1a6e59f7545920/diff:/var/lib/docker/overlay2/8dc177c0589f67ded7d9c229d3c587fe77b3d1c68cf0a5af871bc23768d67d84/diff:/var/lib/docker/overlay2/9a29541ccfee3849e0691950c599bb7e4e51d9026724b1ad13abc8d8e9c140e0/diff:/var/lib/docker/overlay2/50fe1dc8f357b5d624681e6f14d98e6d33a8b6b53d70293ba90ac4435a1e18d8/diff:/var/lib/docker/overlay2/86f301a296dbb7422a3d55a008a9f38278a7a19d68a0f735d298c0c2a431ee30/diff:/var/lib/docker/overlay2/dc8087ea592587f8cb5392cc0ee739c33f2724c47b83767d593b3065914820b0/diff:/var/lib/docker/overlay2/15163601889f0d414f35ccd64ae33a52958605b5b7e50618ed5d4f4bd06ec65b/diff:/var/lib/docker/overlay2/a50cf19d9d69b9c68c6c66a918cbde678b49e8d566d06772af22bf99191b08f3/diff:/var/lib/docker/overlay2/621f3b0fc578721c5d0465771ad007f022ed238fa5a2076f807c077680c26d27/diff:/var/lib/docker/overlay2/2652f9ffde92786a77e3bb35fe07c03a623aaad541f0ca9710839800c4b470e4/diff:/var/lib/docker/overlay2/c853755ee76ea55ad6c00f5eaff82196f4953ee6fb2d27e27ba35f86d56bfc32/diff:/var/lib/docker/overlay2/a0f70e6416a8e618ea7475b5e7f4cdc9a66ac39f0a6c1969c569d8e4f0b5e9eb/diff:/var/lib/docker/overlay2/275d2c643ecb011298df16e0794bebb9a7ec82e190aea53a90369288c521f75e/diff:/var/lib/docker/overlay2/a7e78f238badc23c2c38b7e9b9c4428c0614e825744076161295740d46a20957/diff:/var/lib/docker/overlay2/39fcd4c392271449973511a31d445289c1f8d378d01759fef12c430c9f44f2b8/diff:/var/lib/docker/overlay2/e1c51360d327e86575fe8248415fae12e9dbdde580db0e6f4f4e485ac9f92e3b/diff:/var/lib/docker/overlay2/fecd88783858177cbe3b751f0717b370c5556d7cf0ef163e2710f16fce09d53c/diff:/var/lib/docker/overlay2/3b4c7afaac6f5818bc33bec8c0ec442eb5a1010d0de6fe488460ee83a3901b21/diff:/var/lib/docker/overlay2/47d0047bc42c34ea02c33c1500f96c5109f27f84f973a5636832bbc855761e3f/diff",
"MergedDir": "/var/lib/docker/overlay2/7b1566f809bb5cea6f1e60492a928281ca264fcb5dd82ca9a266c98413582f4e/merged",
"UpperDir": "/var/lib/docker/overlay2/7b1566f809bb5cea6f1e60492a928281ca264fcb5dd82ca9a266c98413582f4e/diff",
"WorkDir": "/var/lib/docker/overlay2/7b1566f809bb5cea6f1e60492a928281ca264fcb5dd82ca9a266c98413582f4e/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "functional-20220602172845-12108",
"Source": "/var/lib/docker/volumes/functional-20220602172845-12108/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "functional-20220602172845-12108",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-20220602172845-12108",
"name.minikube.sigs.k8s.io": "functional-20220602172845-12108",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "efa325e477cb33d07c8eb59e8986c67cdb7a0c9d9485f8e2e3620d01ceacb8a6",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "51168"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "51169"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "51170"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "51171"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "51172"
}
]
},
"SandboxKey": "/var/run/docker/netns/efa325e477cb",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-20220602172845-12108": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"297d7bd31939",
"functional-20220602172845-12108"
],
"NetworkID": "34bd2bef96a2e24112d476abd5ee49cf8b66ed7bdd21d8e661c89d34d79ecd9a",
"EndpointID": "9d193aac4f9f962d9bbecca538ea972a8c4eb5d12eb262da1c86516b2d609ae3",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220602172845-12108 -n functional-20220602172845-12108
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220602172845-12108 -n functional-20220602172845-12108: (6.4547321s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 logs -n 25: (8.1376039s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|-----------------------------------------------------------------------------------------------------|---------------------------------|-------------------|----------------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|-----------------------------------------------------------------------------------------------------|---------------------------------|-------------------|----------------|---------------------|---------------------|
| cp | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
| | cp testdata\cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
| | ssh -n | | | | | |
| | functional-20220602172845-12108 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | functional-20220602172845-12108 cp functional-20220602172845-12108:/home/docker/cp-test.txt | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
| | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd2696599892\001\cp-test.txt | | | | | |
| ssh | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
| | ssh -n | | | | | |
| | functional-20220602172845-12108 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| image | functional-20220602172845-12108 image load --daemon | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108 | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
| | image ls | | | | | |
| image | functional-20220602172845-12108 image load --daemon | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:37 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108 | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | image ls | | | | | |
| image | functional-20220602172845-12108 image load --daemon | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108 | | | | | |
| update-context | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | image ls | | | | | |
| update-context | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-20220602172845-12108 image save | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108 | | | | | |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| image | functional-20220602172845-12108 image rm | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108 | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | image ls | | | | | |
| image | functional-20220602172845-12108 image load | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
| | image ls | | | | | |
| image | functional-20220602172845-12108 image save --daemon | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108 | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
| | image ls --format short | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
| | image ls --format yaml | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
| | image ls --format json | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
| | image ls --format table | | | | | |
| image | functional-20220602172845-12108 image build -t | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
| | localhost/my-image:functional-20220602172845-12108 | | | | | |
| | testdata\build | | | | | |
| image | functional-20220602172845-12108 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
| | image ls | | | | | |
|----------------|-----------------------------------------------------------------------------------------------------|---------------------------------|-------------------|----------------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/06/02 17:35:35
Running on machine: minikube7
Binary: Built with gc go1.18.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0602 17:35:35.613351 12600 out.go:296] Setting OutFile to fd 672 ...
I0602 17:35:35.674499 12600 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0602 17:35:35.674499 12600 out.go:309] Setting ErrFile to fd 716...
I0602 17:35:35.674499 12600 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0602 17:35:35.687754 12600 out.go:303] Setting JSON to false
I0602 17:35:35.690043 12600 start.go:115] hostinfo: {"hostname":"minikube7","uptime":54477,"bootTime":1654136858,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
W0602 17:35:35.690043 12600 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0602 17:35:35.695305 12600 out.go:177] * [functional-20220602172845-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0602 17:35:35.698812 12600 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
I0602 17:35:35.701911 12600 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
I0602 17:35:35.704480 12600 out.go:177] - MINIKUBE_LOCATION=14269
I0602 17:35:35.706590 12600 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0602 17:35:35.710270 12600 config.go:178] Loaded profile config "functional-20220602172845-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0602 17:35:35.711373 12600 driver.go:358] Setting default libvirt URI to qemu:///system
I0602 17:35:38.327166 12600 docker.go:137] docker version: linux-20.10.16
I0602 17:35:38.334875 12600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0602 17:35:40.422069 12600 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0871845s)
I0602 17:35:40.423137 12600 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-06-02 17:35:39.3867836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0602 17:35:40.427485 12600 out.go:177] * Using the docker driver based on existing profile
I0602 17:35:40.430176 12600 start.go:284] selected driver: docker
I0602 17:35:40.430176 12600 start.go:806] validating driver "docker" against &{Name:functional-20220602172845-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602172845-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0602 17:35:40.430176 12600 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0602 17:35:40.451493 12600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0602 17:35:42.446735 12600 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9952326s)
I0602 17:35:42.446735 12600 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-06-02 17:35:41.4676763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0602 17:35:42.499902 12600 cni.go:95] Creating CNI manager for ""
I0602 17:35:42.499902 12600 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0602 17:35:42.499902 12600 start_flags.go:306] config:
{Name:functional-20220602172845-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602172845-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0602 17:35:42.506382 12600 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Thu 2022-06-02 17:29:39 UTC, end at Thu 2022-06-02 18:08:59 UTC. --
Jun 02 17:29:57 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:29:57.550276300Z" level=info msg="API listen on /var/run/docker.sock"
Jun 02 17:30:54 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:30:54.955361500Z" level=info msg="ignoring event" container=6109aec4f76c32123131a9950048e1b2680624bbf4f2abdb5fbbea382e2bae4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:30:55 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:30:55.177866900Z" level=info msg="ignoring event" container=88afb34bb331664affe59a361dd5c3ffd9a2345ec5af59e7dba6ccda8c8d1c48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.613117300Z" level=info msg="ignoring event" container=171d182f4c0e73a955e9602dfee0071f054fe96c2aa9893733f132b2184f8293 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.702451900Z" level=info msg="ignoring event" container=122eab7cec752e2c45bd0016a01e9a27f6528b4eb59b4f680d8675bce7493304 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.705199000Z" level=info msg="ignoring event" container=2726828644f24dff932f4d0c265809664fffb59ac10f4a95c942f4066e28b101 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.705370200Z" level=info msg="ignoring event" container=16005854d998fa81ab4593f0bce15225c7f2083fa4902bc0e14e03f6c097c550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.705979100Z" level=info msg="ignoring event" container=8e12b5d77efff8ad939f279ea7a58f375cd834d467f43d9a932fbbb6eba241bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.906285800Z" level=info msg="ignoring event" container=8aa623a7d4491f7f7814495e952d2fb1e0fdfc313d9828338842cea1a9776245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.108434200Z" level=info msg="ignoring event" container=34874a7d34918a2a4bde07146bbac312628fb19b7a87cf8c01bff98470ed82a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.116174100Z" level=info msg="ignoring event" container=d7e206bfa6439da5e29e68d353cc7e4e602abe2aeba2680db66406b4c569691a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.116376600Z" level=info msg="ignoring event" container=dfdd7bb40a542103af6ae3c7e98c89a1b6933da85d4d129b174c905078a9f9e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.120470000Z" level=info msg="ignoring event" container=2efd45f063a22d30166d80b93a9dcb3574448b784412ebb9894b7b8406eddf81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.317099900Z" level=info msg="ignoring event" container=8bd63a31a2d68a9f6d952871abbe1c2afdc2407cab5c2949a3f3eaa683d4aa18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:05 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:05.316362900Z" level=info msg="ignoring event" container=6d6e979e17aad4d6ea111c47ee5171316116c5131bf9c1a6f139c7c6da1f5d37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:05 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:05.522881500Z" level=info msg="ignoring event" container=566cdb4a240568af0731c62e471d6bfcd1036a8daae1b5d140ac9189e3516227 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:06 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:06.314558700Z" level=info msg="ignoring event" container=21743903ddc531db44b257639c5ffeb72b81d8088d26b5cc139d3151c6a4a590 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:07 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:07.824308000Z" level=info msg="ignoring event" container=b733faa6cfc222aefb11ce7a89c72a66bb6ef2e7a3cb1d8937c69004fe03d2e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:19 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:19.700877400Z" level=info msg="ignoring event" container=e8d8f7baf75548076d13e1dc81c21b7ea1acf0614a5a2e7e4c6b7ab3ce860212 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:19 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:19.700934100Z" level=info msg="ignoring event" container=aa88977d90f779f4c40fa8b71870b33b10e4cc08f0e5aaef66f890e63beb7e35 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:33:27 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:27.052295500Z" level=info msg="ignoring event" container=73ee85e0c6100158f8e9b7c49ab364726698e2a51b0f019c174582fbe125e3c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:34:55 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:34:55.806788300Z" level=info msg="ignoring event" container=baa79e09163dca307a9db5772b47da0d18a764189d37889647f7bac4e80d89f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:34:57 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:34:57.403135400Z" level=info msg="ignoring event" container=dd6055af112ccf04b8d243661af45b2f91d1b43cb41a82081834267f758192a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:38:33 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:38:33.392965900Z" level=info msg="ignoring event" container=1beaa5aad4da9d8e9bec9e82bbad0b86b1c6cea2e2087a09fc88a649a957181c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:38:34 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:38:34.029319700Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
72a6bb0f9d4b5 mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5 32 minutes ago Running mysql 0 c334f32103670
fedf76bb319dd nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514 33 minutes ago Running myfrontend 0 39c76780d10c2
2c4191d862c83 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 34 minutes ago Running echoserver 0 8123fc8f742a2
149fe59761812 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 34 minutes ago Running echoserver 0 560ad2624a500
aedbc0efe8bec nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989 34 minutes ago Running nginx 0 d14787edbecf3
1b0e88fdb16da df7b72818ad2e 35 minutes ago Running kube-controller-manager 2 1880164f78fb4
d626c0c1fde94 a4ca41631cc7a 35 minutes ago Running coredns 1 0eedc77f9c31e
b46fbda7e4d23 6e38f40d628db 35 minutes ago Running storage-provisioner 2 7846c2947960e
7ba303e0303c8 8fa62c12256df 35 minutes ago Running kube-apiserver 0 b22edf049daa2
764f22f755e2c 595f327f224a4 35 minutes ago Running kube-scheduler 1 877ddf09aad5a
9bb388ba532a5 4c03754524064 35 minutes ago Running kube-proxy 1 681f0a9852d79
e7d69d28699d3 25f8c7f3da61c 35 minutes ago Running etcd 1 1c13432e19bf8
21743903ddc53 6e38f40d628db 35 minutes ago Exited storage-provisioner 1 7846c2947960e
73ee85e0c6100 df7b72818ad2e 35 minutes ago Exited kube-controller-manager 1 1880164f78fb4
b733faa6cfc22 a4ca41631cc7a 38 minutes ago Exited coredns 0 34874a7d34918
2efd45f063a22 4c03754524064 38 minutes ago Exited kube-proxy 0 16005854d998f
6d6e979e17aad 595f327f224a4 38 minutes ago Exited kube-scheduler 0 8e12b5d77efff
8bd63a31a2d68 25f8c7f3da61c 38 minutes ago Exited etcd 0 dfdd7bb40a542
*
* ==> coredns [b733faa6cfc2] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
[INFO] Reloading complete
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [d626c0c1fde9] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: functional-20220602172845-12108
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-20220602172845-12108
kubernetes.io/os=linux
minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
minikube.k8s.io/name=functional-20220602172845-12108
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_06_02T17_30_29_0700
minikube.k8s.io/version=v1.26.0-beta.1
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 02 Jun 2022 17:30:24 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-20220602172845-12108
AcquireTime: <unset>
RenewTime: Thu, 02 Jun 2022 18:08:53 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 02 Jun 2022 18:04:21 +0000 Thu, 02 Jun 2022 17:30:21 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Jun 2022 18:04:21 +0000 Thu, 02 Jun 2022 17:30:21 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Jun 2022 18:04:21 +0000 Thu, 02 Jun 2022 17:30:21 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 02 Jun 2022 18:04:21 +0000 Thu, 02 Jun 2022 17:30:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-20220602172845-12108
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: a34bb2508bce429bb90502b0ef044420
System UUID: a34bb2508bce429bb90502b0ef044420
Boot ID: 174c87a1-4ba0-4f3f-a840-04757270163f
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.16
Kubelet Version: v1.23.6
Kube-Proxy Version: v1.23.6
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-54fbb85-l5tpx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
default hello-node-connect-74cf8bc446-qjhfg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
default mysql-b87c45988-mbb25 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 33m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
kube-system coredns-64897985d-xlttb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-20220602172845-12108 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-20220602172845-12108 250m (1%) 0 (0%) 0 (0%) 0 (0%) 35m
kube-system kube-controller-manager-functional-20220602172845-12108 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-qxvkt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-20220602172845-12108 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 35m kube-proxy
Normal Starting 38m kube-proxy
Normal NodeHasNoDiskPressure 38m (x4 over 38m) kubelet Node functional-20220602172845-12108 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m (x4 over 38m) kubelet Node functional-20220602172845-12108 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 38m (x4 over 38m) kubelet Node functional-20220602172845-12108 status is now: NodeHasSufficientMemory
Normal Starting 38m kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 38m kubelet Node functional-20220602172845-12108 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-20220602172845-12108 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 38m kubelet Node functional-20220602172845-12108 status is now: NodeHasSufficientMemory
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 38m kubelet Node functional-20220602172845-12108 status is now: NodeReady
Normal Starting 35m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 35m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 35m (x7 over 35m) kubelet Node functional-20220602172845-12108 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 35m (x7 over 35m) kubelet Node functional-20220602172845-12108 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 35m (x7 over 35m) kubelet Node functional-20220602172845-12108 status is now: NodeHasSufficientPID
*
* ==> dmesg <==
* [Jun 2 17:43] WSL2: Performing memory compaction.
[Jun 2 17:44] WSL2: Performing memory compaction.
[Jun 2 17:45] WSL2: Performing memory compaction.
[Jun 2 17:46] WSL2: Performing memory compaction.
[Jun 2 17:47] WSL2: Performing memory compaction.
[Jun 2 17:48] WSL2: Performing memory compaction.
[Jun 2 17:49] WSL2: Performing memory compaction.
[Jun 2 17:50] WSL2: Performing memory compaction.
[Jun 2 17:51] WSL2: Performing memory compaction.
[Jun 2 17:52] WSL2: Performing memory compaction.
[Jun 2 17:53] WSL2: Performing memory compaction.
[Jun 2 17:54] WSL2: Performing memory compaction.
[Jun 2 17:55] WSL2: Performing memory compaction.
[Jun 2 17:56] WSL2: Performing memory compaction.
[Jun 2 17:57] WSL2: Performing memory compaction.
[Jun 2 17:58] WSL2: Performing memory compaction.
[Jun 2 17:59] WSL2: Performing memory compaction.
[Jun 2 18:00] WSL2: Performing memory compaction.
[Jun 2 18:01] WSL2: Performing memory compaction.
[Jun 2 18:02] WSL2: Performing memory compaction.
[Jun 2 18:04] WSL2: Performing memory compaction.
[Jun 2 18:05] WSL2: Performing memory compaction.
[Jun 2 18:06] WSL2: Performing memory compaction.
[Jun 2 18:07] WSL2: Performing memory compaction.
[Jun 2 18:08] WSL2: Performing memory compaction.
*
* ==> etcd [8bd63a31a2d6] <==
* {"level":"info","ts":"2022-06-02T17:30:18.622Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-06-02T17:30:18.623Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-02T17:30:18.623Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"warn","ts":"2022-06-02T17:30:24.417Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.4264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
{"level":"warn","ts":"2022-06-02T17:30:24.418Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.5579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:352"}
{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[1774619180] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:29; }","duration":"105.6827ms","start":"2022-06-02T17:30:24.312Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[1774619180] 'agreement among raft nodes before linearized reading' (duration: 22.3896ms)","trace[1774619180] 'range keys from in-memory index tree' (duration: 83.1483ms)"],"step_count":2}
{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[570474075] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:29; }","duration":"105.664ms","start":"2022-06-02T17:30:24.312Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[570474075] 'agreement among raft nodes before linearized reading' (duration: 22.3676ms)","trace[570474075] 'range keys from in-memory index tree' (duration: 83.0427ms)"],"step_count":2}
{"level":"warn","ts":"2022-06-02T17:30:24.418Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.7701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:119"}
{"level":"warn","ts":"2022-06-02T17:30:24.418Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.3459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/system-leader-election\" ","response":"range_response_count:0 size:4"}
{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[1083361185] range","detail":"{range_begin:/registry/flowschemas/system-leader-election; range_end:; response_count:0; response_revision:29; }","duration":"104.4003ms","start":"2022-06-02T17:30:24.313Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[1083361185] 'agreement among raft nodes before linearized reading' (duration: 20.9559ms)","trace[1083361185] 'range keys from in-memory index tree' (duration: 83.3714ms)"],"step_count":2}
{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[504897351] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:29; }","duration":"106.9576ms","start":"2022-06-02T17:30:24.311Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[504897351] 'agreement among raft nodes before linearized reading' (duration: 23.5141ms)","trace[504897351] 'range keys from in-memory index tree' (duration: 83.1862ms)"],"step_count":2}
{"level":"warn","ts":"2022-06-02T17:30:24.418Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.6995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:114"}
{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[2016913714] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:29; }","duration":"107.2084ms","start":"2022-06-02T17:30:24.311Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[2016913714] 'agreement among raft nodes before linearized reading' (duration: 23.4617ms)","trace[2016913714] 'range keys from in-memory index tree' (duration: 83.1516ms)"],"step_count":2}
{"level":"info","ts":"2022-06-02T17:30:41.321Z","caller":"traceutil/trace.go:171","msg":"trace[1401755233] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"113.6492ms","start":"2022-06-02T17:30:41.207Z","end":"2022-06-02T17:30:41.321Z","steps":["trace[1401755233] 'process raft request' (duration: 93.9624ms)","trace[1401755233] 'compare' (duration: 18.93ms)"],"step_count":2}
{"level":"info","ts":"2022-06-02T17:30:46.824Z","caller":"traceutil/trace.go:171","msg":"trace[909353217] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"110.7065ms","start":"2022-06-02T17:30:46.713Z","end":"2022-06-02T17:30:46.824Z","steps":["trace[909353217] 'process raft request' (duration: 87.5269ms)","trace[909353217] 'compare' (duration: 22.965ms)"],"step_count":2}
{"level":"info","ts":"2022-06-02T17:30:56.311Z","caller":"traceutil/trace.go:171","msg":"trace[2109093670] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"104.4418ms","start":"2022-06-02T17:30:56.206Z","end":"2022-06-02T17:30:56.311Z","steps":["trace[2109093670] 'process raft request' (duration: 104.3814ms)"],"step_count":1}
{"level":"info","ts":"2022-06-02T17:30:56.311Z","caller":"traceutil/trace.go:171","msg":"trace[1922025376] transaction","detail":"{read_only:false; number_of_response:1; response_revision:494; }","duration":"110.4308ms","start":"2022-06-02T17:30:56.201Z","end":"2022-06-02T17:30:56.311Z","steps":["trace[1922025376] 'compare' (duration: 87.4346ms)"],"step_count":1}
{"level":"info","ts":"2022-06-02T17:33:02.802Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-06-02T17:33:02.802Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220602172845-12108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/06/02 17:33:02 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/06/02 17:33:03 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-06-02T17:33:03.003Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-06-02T17:33:03.013Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-02T17:33:03.016Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-02T17:33:03.016Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220602172845-12108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> etcd [e7d69d28699d] <==
* {"level":"warn","ts":"2022-06-02T17:36:25.810Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-02T17:36:25.224Z","time spent":"586.2614ms","remote":"127.0.0.1:34670","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":427,"request content":"key:\"/registry/ranges/servicenodeports\" "}
{"level":"info","ts":"2022-06-02T17:36:38.369Z","caller":"traceutil/trace.go:171","msg":"trace[1191981672] linearizableReadLoop","detail":"{readStateIndex:1020; appliedIndex:1020; }","duration":"116.746ms","start":"2022-06-02T17:36:38.252Z","end":"2022-06-02T17:36:38.369Z","steps":["trace[1191981672] 'read index received' (duration: 116.7355ms)","trace[1191981672] 'applied index is now lower than readState.Index' (duration: 7.6µs)"],"step_count":2}
{"level":"warn","ts":"2022-06-02T17:36:38.398Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.8866ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-02T17:36:38.398Z","caller":"traceutil/trace.go:171","msg":"trace[1799755926] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:913; }","duration":"146.0811ms","start":"2022-06-02T17:36:38.252Z","end":"2022-06-02T17:36:38.398Z","steps":["trace[1799755926] 'agreement among raft nodes before linearized reading' (duration: 117.0069ms)","trace[1799755926] 'count revisions from in-memory index tree' (duration: 28.8544ms)"],"step_count":2}
{"level":"warn","ts":"2022-06-02T17:36:38.398Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.1687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-02T17:36:38.398Z","caller":"traceutil/trace.go:171","msg":"trace[1421134268] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:913; }","duration":"119.2197ms","start":"2022-06-02T17:36:38.279Z","end":"2022-06-02T17:36:38.398Z","steps":["trace[1421134268] 'agreement among raft nodes before linearized reading' (duration: 90.136ms)","trace[1421134268] 'count revisions from in-memory index tree' (duration: 29.0158ms)"],"step_count":2}
{"level":"info","ts":"2022-06-02T17:36:57.588Z","caller":"traceutil/trace.go:171","msg":"trace[589399620] linearizableReadLoop","detail":"{readStateIndex:1038; appliedIndex:1037; }","duration":"184.7549ms","start":"2022-06-02T17:36:57.403Z","end":"2022-06-02T17:36:57.588Z","steps":["trace[589399620] 'read index received' (duration: 183.4298ms)","trace[589399620] 'applied index is now lower than readState.Index' (duration: 1.321ms)"],"step_count":2}
{"level":"info","ts":"2022-06-02T17:36:57.588Z","caller":"traceutil/trace.go:171","msg":"trace[63869590] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"446.2825ms","start":"2022-06-02T17:36:57.142Z","end":"2022-06-02T17:36:57.588Z","steps":["trace[63869590] 'process raft request' (duration: 444.8829ms)"],"step_count":1}
{"level":"warn","ts":"2022-06-02T17:36:57.588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-02T17:36:57.142Z","time spent":"446.3547ms","remote":"127.0.0.1:34636","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:919 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128013425464041231 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
{"level":"warn","ts":"2022-06-02T17:36:57.589Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.2684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-02T17:36:57.589Z","caller":"traceutil/trace.go:171","msg":"trace[1713815688] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:927; }","duration":"185.4318ms","start":"2022-06-02T17:36:57.403Z","end":"2022-06-02T17:36:57.589Z","steps":["trace[1713815688] 'agreement among raft nodes before linearized reading' (duration: 185.0151ms)"],"step_count":1}
{"level":"warn","ts":"2022-06-02T17:37:37.122Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.4254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-02T17:37:37.123Z","caller":"traceutil/trace.go:171","msg":"trace[1947285661] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:955; }","duration":"108.6294ms","start":"2022-06-02T17:37:37.014Z","end":"2022-06-02T17:37:37.123Z","steps":["trace[1947285661] 'range keys from in-memory index tree' (duration: 108.3134ms)"],"step_count":1}
{"level":"info","ts":"2022-06-02T17:43:20.949Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":988}
{"level":"info","ts":"2022-06-02T17:43:20.951Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":988,"took":"1.3657ms"}
{"level":"info","ts":"2022-06-02T17:48:20.967Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1198}
{"level":"info","ts":"2022-06-02T17:48:20.968Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1198,"took":"798.2µs"}
{"level":"info","ts":"2022-06-02T17:53:20.985Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1408}
{"level":"info","ts":"2022-06-02T17:53:20.986Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1408,"took":"663.2µs"}
{"level":"info","ts":"2022-06-02T17:58:21.002Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1618}
{"level":"info","ts":"2022-06-02T17:58:21.004Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1618,"took":"796.3µs"}
{"level":"info","ts":"2022-06-02T18:03:21.020Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1828}
{"level":"info","ts":"2022-06-02T18:03:21.021Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1828,"took":"934.8µs"}
{"level":"info","ts":"2022-06-02T18:08:21.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2038}
{"level":"info","ts":"2022-06-02T18:08:21.036Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2038,"took":"634.6µs"}
*
* ==> kernel <==
* 18:09:00 up 58 min, 0 users, load average: 0.42, 0.32, 0.45
Linux functional-20220602172845-12108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [7ba303e0303c] <==
* I0602 17:34:36.903145 1 trace.go:205] Trace[497186442]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:3e2988d3-a7d8-4081-861e-249452a4eb8e,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (02-Jun-2022 17:34:35.815) (total time: 1087ms):
Trace[497186442]: ---"Object stored in database" 999ms (17:34:36.902)
Trace[497186442]: [1.087817s] [1.087817s] END
I0602 17:34:36.903208 1 trace.go:205] Trace[148171019]: "Update" url:/apis/apps/v1/namespaces/default/deployments/hello-node/status,user-agent:kube-controller-manager/v1.23.6 (linux/amd64) kubernetes/ad33385/system:serviceaccount:kube-system:deployment-controller,audit-id:f94b95c4-3f05-4b8e-918f-cce13fe5f05b,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (02-Jun-2022 17:34:35.905) (total time: 998ms):
Trace[148171019]: ---"Object stored in database" 997ms (17:34:36.903)
Trace[148171019]: [998.0108ms] [998.0108ms] END
I0602 17:34:36.915213 1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.99.217.46]
I0602 17:34:36.915418 1 trace.go:205] Trace[658946329]: "Create" url:/api/v1/namespaces/default/services,user-agent:kubectl.exe/v1.18.2 (windows/amd64) kubernetes/52c56ce,audit-id:0b497938-188e-4c38-95e9-458b54ccdad6,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (02-Jun-2022 17:34:35.803) (total time: 1111ms):
Trace[658946329]: ---"Object stored in database" 1111ms (17:34:36.915)
Trace[658946329]: [1.1115856s] [1.1115856s] END
I0602 17:35:43.105842 1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.104.243.176]
I0602 17:36:25.810547 1 trace.go:205] Trace[831226069]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:aad7c730-33b7-4c99-9a3e-889e24e3eb41,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (02-Jun-2022 17:36:24.424) (total time: 1385ms):
Trace[831226069]: ---"About to write a response" 1385ms (17:36:25.810)
Trace[831226069]: [1.3856389s] [1.3856389s] END
I0602 17:36:25.810764 1 trace.go:205] Trace[1244335267]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (02-Jun-2022 17:36:24.222) (total time: 1587ms):
Trace[1244335267]: [1.5879435s] [1.5879435s] END
I0602 17:36:25.810546 1 trace.go:205] Trace[868752966]: "Get" url:/api/v1/namespaces/default/services/nginx-svc,user-agent:kubectl.exe/v1.18.2 (windows/amd64) kubernetes/52c56ce,audit-id:36708dc3-e450-488c-861d-92bd08e5b67e,client:192.168.49.1,accept:application/json,protocol:HTTP/2.0 (02-Jun-2022 17:36:23.859) (total time: 1950ms):
Trace[868752966]: ---"About to write a response" 1950ms (17:36:25.810)
Trace[868752966]: [1.9506941s] [1.9506941s] END
I0602 17:36:25.811631 1 trace.go:205] Trace[451222435]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:1f73a754-efc9-495b-b471-dbbf00a8de8e,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (02-Jun-2022 17:36:24.222) (total time: 1588ms):
Trace[451222435]: ---"Listing from storage done" 1588ms (17:36:25.810)
Trace[451222435]: [1.5888983s] [1.5888983s] END
W0602 17:46:16.623442 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0602 17:55:03.636337 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0602 18:04:29.873047 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
*
* ==> kube-controller-manager [1b0e88fdb16d] <==
* I0602 17:33:40.603736 1 shared_informer.go:247] Caches are synced for persistent volume
I0602 17:33:40.607680 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0602 17:33:40.610220 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0602 17:33:40.610383 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0602 17:33:40.612008 1 shared_informer.go:247] Caches are synced for PVC protection
I0602 17:33:40.612192 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0602 17:33:40.620002 1 shared_informer.go:247] Caches are synced for resource quota
I0602 17:33:40.629084 1 shared_informer.go:247] Caches are synced for cronjob
I0602 17:33:40.702120 1 shared_informer.go:247] Caches are synced for TTL after finished
I0602 17:33:40.702270 1 shared_informer.go:247] Caches are synced for service account
I0602 17:33:40.702826 1 shared_informer.go:247] Caches are synced for job
I0602 17:33:40.703753 1 shared_informer.go:247] Caches are synced for namespace
I0602 17:33:40.704838 1 shared_informer.go:247] Caches are synced for resource quota
I0602 17:33:40.716759 1 shared_informer.go:247] Caches are synced for attach detach
I0602 17:33:41.108301 1 shared_informer.go:247] Caches are synced for garbage collector
I0602 17:33:41.180581 1 shared_informer.go:247] Caches are synced for garbage collector
I0602 17:33:41.180757 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0602 17:34:25.073098 1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0602 17:34:25.073243 1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0602 17:34:34.703002 1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
I0602 17:34:34.904761 1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-qjhfg"
I0602 17:34:35.304018 1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
I0602 17:34:35.409147 1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-l5tpx"
I0602 17:35:43.212131 1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
I0602 17:35:43.324203 1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-mbb25"
*
* ==> kube-controller-manager [73ee85e0c610] <==
* /usr/local/go/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000130e00, {0x4d51100, 0xc0005a8058}, 0x8ef)
/usr/local/go/src/crypto/tls/conn.go:799 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc000130e00, 0x0)
/usr/local/go/src/crypto/tls/conn.go:606 +0x112
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0xc000130e00, {0xc000d32000, 0x1000, 0x919560})
/usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
bufio.(*Reader).Read(0xc00017b620, {0xc0000e70e0, 0x9, 0x934bc2})
/usr/local/go/src/bufio/bufio.go:227 +0x1b4
io.ReadAtLeast({0x4d48ae0, 0xc00017b620}, {0xc0000e70e0, 0x9, 0x9}, 0x9)
/usr/local/go/src/io/io.go:328 +0x9a
io.ReadFull(...)
/usr/local/go/src/io/io.go:347
k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc0000e70e0, 0x9, 0xc001d074a0}, {0x4d48ae0, 0xc00017b620})
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x6e
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000e70a0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:498 +0x95
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000a5ff98)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2101 +0x130
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0009e3b00)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5
*
* ==> kube-proxy [2efd45f063a2] <==
* E0602 17:30:45.022452 1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I0602 17:30:45.106745 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0602 17:30:45.110103 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0602 17:30:45.114380 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0602 17:30:45.118433 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0602 17:30:45.123477 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I0602 17:30:45.316774 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0602 17:30:45.316884 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0602 17:30:45.317007 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0602 17:30:45.521001 1 server_others.go:206] "Using iptables Proxier"
I0602 17:30:45.521138 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0602 17:30:45.521161 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0602 17:30:45.521203 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0602 17:30:45.522484 1 server.go:656] "Version info" version="v1.23.6"
I0602 17:30:45.523620 1 config.go:317] "Starting service config controller"
I0602 17:30:45.523766 1 shared_informer.go:240] Waiting for caches to sync for service config
I0602 17:30:45.523678 1 config.go:226] "Starting endpoint slice config controller"
I0602 17:30:45.523980 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0602 17:30:45.701703 1 shared_informer.go:247] Caches are synced for service config
I0602 17:30:45.701735 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-proxy [9bb388ba532a] <==
* E0602 17:33:06.906472 1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I0602 17:33:06.910503 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0602 17:33:06.913742 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0602 17:33:06.916532 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0602 17:33:06.919736 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0602 17:33:06.922743 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E0602 17:33:06.926472 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220602172845-12108": dial tcp 192.168.49.2:8441: connect: connection refused
E0602 17:33:08.026457 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220602172845-12108": dial tcp 192.168.49.2:8441: connect: connection refused
I0602 17:33:17.209151 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0602 17:33:17.209273 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0602 17:33:17.209436 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0602 17:33:17.614819 1 server_others.go:206] "Using iptables Proxier"
I0602 17:33:17.615238 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0602 17:33:17.615264 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0602 17:33:17.615346 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0602 17:33:17.617852 1 server.go:656] "Version info" version="v1.23.6"
I0602 17:33:17.620806 1 config.go:317] "Starting service config controller"
I0602 17:33:17.621124 1 shared_informer.go:240] Waiting for caches to sync for service config
I0602 17:33:17.620923 1 config.go:226] "Starting endpoint slice config controller"
I0602 17:33:17.621169 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0602 17:33:17.722650 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0602 17:33:17.722781 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [6d6e979e17aa] <==
* E0602 17:30:25.304284 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0602 17:30:25.304318 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0602 17:30:25.304375 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0602 17:30:25.304437 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0602 17:30:25.304461 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0602 17:30:25.305411 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0602 17:30:25.305524 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0602 17:30:25.356211 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0602 17:30:25.356341 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0602 17:30:25.404116 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0602 17:30:25.404165 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0602 17:30:25.463029 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0602 17:30:25.463183 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0602 17:30:25.563238 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0602 17:30:25.563480 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0602 17:30:25.603459 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0602 17:30:25.603630 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0602 17:30:25.603635 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0602 17:30:25.603660 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0602 17:30:25.704167 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0602 17:30:25.704272 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0602 17:30:28.220787 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0602 17:33:02.803787 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0602 17:33:02.805491 1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
I0602 17:33:02.805655 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
*
* ==> kube-scheduler [764f22f755e2] <==
* W0602 17:33:17.003203 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0602 17:33:17.003243 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0602 17:33:17.003259 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0602 17:33:17.003275 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0602 17:33:17.115792 1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
I0602 17:33:17.203043 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0602 17:33:17.203316 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0602 17:33:17.203081 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0602 17:33:17.203450 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0602 17:33:17.303702 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0602 17:33:25.009877 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0602 17:33:25.010971 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
E0602 17:33:25.011203 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
E0602 17:33:25.011624 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
E0602 17:33:25.011704 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
E0602 17:33:25.011830 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
E0602 17:33:25.012819 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
E0602 17:33:25.012873 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
E0602 17:33:25.013010 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
E0602 17:33:25.013260 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
E0602 17:33:25.101887 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
E0602 17:33:25.102017 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
E0602 17:33:25.102079 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
E0602 17:33:25.102122 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
E0602 17:33:25.102323 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
*
* ==> kubelet <==
* -- Logs begin at Thu 2022-06-02 17:29:39 UTC, end at Thu 2022-06-02 18:09:01 UTC. --
Jun 02 17:35:00 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:00.426051 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-qjhfg through plugin: invalid network status for"
Jun 02 17:35:00 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:00.604378 6098 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011\" (UniqueName: \"kubernetes.io/host-path/54ac2c9b-7834-43cc-9659-4796f4b3a5c4-pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011\") pod \"sp-pod\" (UID: \"54ac2c9b-7834-43cc-9659-4796f4b3a5c4\") " pod="default/sp-pod"
Jun 02 17:35:00 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:00.604609 6098 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jmlc\" (UniqueName: \"kubernetes.io/projected/54ac2c9b-7834-43cc-9659-4796f4b3a5c4-kube-api-access-7jmlc\") pod \"sp-pod\" (UID: \"54ac2c9b-7834-43cc-9659-4796f4b3a5c4\") " pod="default/sp-pod"
Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.304256 6098 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=13c07143-bcda-415b-987d-4813238cdbe3 path="/var/lib/kubelet/pods/13c07143-bcda-415b-987d-4813238cdbe3/volumes"
Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.730504 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.743742 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-l5tpx through plugin: invalid network status for"
Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.806854 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-qjhfg through plugin: invalid network status for"
Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.821523 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.823851 6098 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="39c76780d10c2c949604b78c48825227602add04c93a94e540ddc889d5416150"
Jun 02 17:35:02 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:02.840034 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
Jun 02 17:35:03 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:03.878648 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
Jun 02 17:35:43 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:43.331556 6098 topology_manager.go:200] "Topology Admit Handler"
Jun 02 17:35:43 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:43.503528 6098 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvf5z\" (UniqueName: \"kubernetes.io/projected/f72f89e4-b36a-4fdd-8ebe-b615c45f18a4-kube-api-access-hvf5z\") pod \"mysql-b87c45988-mbb25\" (UID: \"f72f89e4-b36a-4fdd-8ebe-b615c45f18a4\") " pod="default/mysql-b87c45988-mbb25"
Jun 02 17:35:44 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:44.554683 6098 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c334f32103670ae236edfaa2a0bdf63555e49e3874fa38d16d17e4c30c462e64"
Jun 02 17:35:44 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:44.555557 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-mbb25 through plugin: invalid network status for"
Jun 02 17:35:45 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:45.573025 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-mbb25 through plugin: invalid network status for"
Jun 02 17:36:25 functional-20220602172845-12108 kubelet[6098]: I0602 17:36:25.975103 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-mbb25 through plugin: invalid network status for"
Jun 02 17:36:27 functional-20220602172845-12108 kubelet[6098]: I0602 17:36:27.355554 6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-mbb25 through plugin: invalid network status for"
Jun 02 17:38:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:38:16.028083 6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 02 17:43:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:43:16.023590 6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 02 17:48:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:48:16.025281 6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 02 17:53:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:53:16.026511 6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 02 17:58:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:58:16.027619 6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 02 18:03:16 functional-20220602172845-12108 kubelet[6098]: W0602 18:03:16.029803 6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 02 18:08:16 functional-20220602172845-12108 kubelet[6098]: W0602 18:08:16.030301 6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [21743903ddc5] <==
* I0602 17:33:06.121988 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0602 17:33:06.201342 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> storage-provisioner [b46fbda7e4d2] <==
* I0602 17:33:20.522005 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0602 17:33:25.113668 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0602 17:33:25.113880 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0602 17:33:42.660204 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0602 17:33:42.660586 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220602172845-12108_c1102dcb-2ca9-47cd-ae2b-4d0e28cc1795!
I0602 17:33:42.660584 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81205cc1-768b-42f4-93e6-bb23e91e5f2d", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220602172845-12108_c1102dcb-2ca9-47cd-ae2b-4d0e28cc1795 became leader
I0602 17:33:42.761527 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220602172845-12108_c1102dcb-2ca9-47cd-ae2b-4d0e28cc1795!
I0602 17:34:25.102478 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0602 17:34:25.102841 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 08a31009-78ce-400c-b4e4-386a272ea447 464 0 2022-06-02 17:30:49 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-06-02 17:30:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011 &PersistentVolumeClaim{ObjectMeta:{myclaim default 309b9820-d9d6-4da7-8d9a-107aeedb3011 706 0 2022-06-02 17:34:25 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kubectl.exe Update v1 2022-06-02 17:34:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2022-06-02 17:34:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0602 17:34:25.103415 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"309b9820-d9d6-4da7-8d9a-107aeedb3011", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0602 17:34:25.104060 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011" provisioned
I0602 17:34:25.104226 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0602 17:34:25.104239 1 volume_store.go:212] Trying to save persistentvolume "pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011"
I0602 17:34:25.122669 1 volume_store.go:219] persistentvolume "pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011" saved
I0602 17:34:25.123421 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"309b9820-d9d6-4da7-8d9a-107aeedb3011", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220602172845-12108 -n functional-20220602172845-12108
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220602172845-12108 -n functional-20220602172845-12108: (6.3352679s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-20220602172845-12108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-20220602172845-12108 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220602172845-12108 describe pod : exit status 1 (198.5873ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-20220602172845-12108 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2073.94s)