=== RUN TestFunctional/serial/ComponentHealth
functional_test.go:886: (dbg) Run: kubectl --context functional-20210915172222-22677 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:900: etcd phase: Running
functional_test.go:908: etcd is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2021-09-15 17:22:43 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc001097d88 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc000484000} Ready:false RestartCount:1 Image:k8s.gcr.io/etcd:3.5.0-0 ImageID:docker-pullable://k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d ContainerID:docker://a2ab135084448aa6d636900ab1f0540b9e9193ae0e76bd37c4c8ae4bea61a41f}]}
functional_test.go:900: kube-apiserver phase: Running
functional_test.go:908: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2021-09-15 17:22:43 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:<nil> Terminated:0xc000484070} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-apiserver:v1.22.1 ImageID:docker-pullable://k8s.gcr.io/kube-apiserver@sha256:6862d5a70cea8f3ef49213d6a36b7bfbbf90f99fb37f7124505be55f0ef51364 ContainerID:docker://6c4926bac9cb53123ce965cef87735685a88279ee8ff43276da6d4c5d7b5dfc0}]}
functional_test.go:900: kube-controller-manager phase: Running
functional_test.go:910: kube-controller-manager status: Ready
functional_test.go:900: kube-scheduler phase: Running
functional_test.go:908: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2021-09-15 17:22:43 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:0xc000160498 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0004840e0} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-scheduler:v1.22.1 ImageID:docker-pullable://k8s.gcr.io/kube-scheduler@sha256:e1a999694bf4b9198bc220216680ef651fabe406445a93c2d354f9dd7e53c1fd ContainerID:docker://573579ce53ca497923a54aa7891136829d8a1830572fb9f1a112d2c8064cbf46}]}
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect functional-20210915172222-22677
helpers_test.go:236: (dbg) docker inspect functional-20210915172222-22677:
-- stdout --
[
{
"Id": "aaeb7540ca0738947213956ba1f9dc5b08b1fc8fc8db458a131734894867e5e7",
"Created": "2021-09-15T17:22:24.298109722Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 49020,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-09-15T17:22:24.759280979Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
"ResolvConfPath": "/var/lib/docker/containers/aaeb7540ca0738947213956ba1f9dc5b08b1fc8fc8db458a131734894867e5e7/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/aaeb7540ca0738947213956ba1f9dc5b08b1fc8fc8db458a131734894867e5e7/hostname",
"HostsPath": "/var/lib/docker/containers/aaeb7540ca0738947213956ba1f9dc5b08b1fc8fc8db458a131734894867e5e7/hosts",
"LogPath": "/var/lib/docker/containers/aaeb7540ca0738947213956ba1f9dc5b08b1fc8fc8db458a131734894867e5e7/aaeb7540ca0738947213956ba1f9dc5b08b1fc8fc8db458a131734894867e5e7-json.log",
"Name": "/functional-20210915172222-22677",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-20210915172222-22677:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-20210915172222-22677",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [
{
"PathOnHost": "/dev/fuse",
"PathInContainer": "/dev/fuse",
"CgroupPermissions": "rwm"
}
],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/620d3c53f1c4fdcadb25cff396b74db4be0b1ed40a24640bdb9ec0022f02f474-init/diff:/var/lib/docker/overlay2/0f8a10c746fedd8d666b9dae9bdd481a4ecb3bdc1c19c1763f5067cb6f6ec480/diff:/var/lib/docker/overlay2/f54dd51211f7fd37ed9e7e1b70117531d83c45068e0d908bb274324aaab9559e/diff:/var/lib/docker/overlay2/db8bcdf3259b37def0e8daacd8a4b91582e91b1335c4898bead8f0570ee379f4/diff:/var/lib/docker/overlay2/38faf25fee6b5c923ea643302bba2da4194356f1ecd1442a2371c90afb025b10/diff:/var/lib/docker/overlay2/c4a1a3e49e5edffd417be1158a98868c52bdfc84e62524f2c795e427d987e984/diff:/var/lib/docker/overlay2/e68c2d7fd8912bc6f5e5c4fd63053076de99a4d27b5ac2d087e97b170c724fbd/diff:/var/lib/docker/overlay2/c74734152182eae7a2408058cc612acaac307696039525cbf809638bdc5f0291/diff:/var/lib/docker/overlay2/871cb5ebce097dd4aeeed897eae2f2516c492acd2cf20e22a9bd621400d03fab/diff:/var/lib/docker/overlay2/c6de31e81e924f091a3674cc807ae856aaaf0bd75eede35b8012e555a78482f8/diff:/var/lib/docker/overlay2/5cb89b
dcb371bd4714cf4cc592534e708077fc8ba74ac5611767920d2e72b515/diff:/var/lib/docker/overlay2/855cba59b338da688aa696b251e03ec2f0d9dc95537b26265f31042b64dbcc2d/diff:/var/lib/docker/overlay2/68e4b369a0e1b1a42b5b9874ee9fa0849598643096f78bd2ff9f86e7f0dee281/diff:/var/lib/docker/overlay2/7d9c9be2ec55a3cfaeb8e493bde1f09d3e07500aa41b31d9dc92e3a045bfaa88/diff:/var/lib/docker/overlay2/e229f9da50c177e6addfce418c74667438f518d5821a0f714cb58a3968830c71/diff:/var/lib/docker/overlay2/ef41fa20aaf0f3e8b0f0873abea368bc5dfaf6684783ca409e3d6dfa74ca2b33/diff:/var/lib/docker/overlay2/8160677224d3d0ba75ceac5e22adbe4420f58aa9df575a4fa1eb16cfa5d36cb1/diff:/var/lib/docker/overlay2/747c3fc9ab1ea2616d662bd3b2e2f1ddbdac907a6eea86ac017e9ce3e66dc4d9/diff:/var/lib/docker/overlay2/f8eb124418298912c3691320300767eabaeae37f352f56c2d0e5a40dd97b0bc2/diff:/var/lib/docker/overlay2/c37f4c78c3026a8785fb4aca9fa78dc5ac21307cb3bfbf746bacec078e037770/diff:/var/lib/docker/overlay2/ee71c4960738cdf41b558b5aa16215f18299ecf0b241560a77477f080674ba8c/diff:/var/lib/d
ocker/overlay2/f2132a0cc49a4383ce675ba6870120bd2c58077afad82ab90d066c0651128e1d/diff:/var/lib/docker/overlay2/44200c5ed83b044737127b23b0b15c55e21152acc08ba98910e0dcb93835c4d5/diff:/var/lib/docker/overlay2/22bce5e99989e32cb7a1f876e412e03427623b5cb70e4709d0336c2cae5f4b37/diff:/var/lib/docker/overlay2/ccfb925585c3768c7aba8e731873849181bedce4d441b3095c1abfd310c405e1/diff:/var/lib/docker/overlay2/a90553cf4067ab91ee5ed23daf5f3aab785ded168917688d3c11e7eb856cc869/diff:/var/lib/docker/overlay2/dc493bca572baf2fff7143d6c551dc24acb22082e51a68200466da7510d37550/diff:/var/lib/docker/overlay2/e4fdd0a7dd5c09a7507c2e66bb44972d2db20fd3c58d2191eaa45291e1a47321/diff:/var/lib/docker/overlay2/4badf8f702a906675401fab2bb8327dafc5e0098b8a40e8ae31d3f4337c5539e/diff:/var/lib/docker/overlay2/d7a568f51d57ee1283764757ae3c91db612a7acb01906d88f7321359f27d4dcb/diff:/var/lib/docker/overlay2/123e8c40e6520b691981a6af80c211632618f36cb297ef8a28376985e8b9eddf/diff:/var/lib/docker/overlay2/fcfefbd58c332906200403538c8d881d31cbd97ecd5a2e223b198713731
51469/diff:/var/lib/docker/overlay2/a9e5b72b4ba31d329ac284cad558db42f24197775aaabfd188f2846a0a4ce96d/diff:/var/lib/docker/overlay2/7916e1ed717fc8628e14637400ba911d6e64209e501841613f69224c999b54f8/diff:/var/lib/docker/overlay2/3c121fd6f387f06c1b3a211eb1d7db64ffd333a873146a734de241e936ab5322/diff:/var/lib/docker/overlay2/9958539b4278fcc1c310de22689891aba5f238a94351d3c645b9dacba922241b/diff:/var/lib/docker/overlay2/c6c60f0d02b1e1aad2ea4fa1fb017d98e1ca4602689452dbd721342c0927fc6a/diff:/var/lib/docker/overlay2/e61aabb9eb2161801d4795e4a00f41afd54c504a52aeeef70d49d2a4f47fcd99/diff:/var/lib/docker/overlay2/a69e80d9160e6158cf9f37881d60928bf3221341b1fffe8d2855488233278102/diff:/var/lib/docker/overlay2/f76fd1ba3588d22f5228ab597df7a62e20a79217c1712dbc33e20061e12891c6/diff",
"MergedDir": "/var/lib/docker/overlay2/620d3c53f1c4fdcadb25cff396b74db4be0b1ed40a24640bdb9ec0022f02f474/merged",
"UpperDir": "/var/lib/docker/overlay2/620d3c53f1c4fdcadb25cff396b74db4be0b1ed40a24640bdb9ec0022f02f474/diff",
"WorkDir": "/var/lib/docker/overlay2/620d3c53f1c4fdcadb25cff396b74db4be0b1ed40a24640bdb9ec0022f02f474/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-20210915172222-22677",
"Source": "/var/lib/docker/volumes/functional-20210915172222-22677/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-20210915172222-22677",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-20210915172222-22677",
"name.minikube.sigs.k8s.io": "functional-20210915172222-22677",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2962495ad098ffc3bbac4f77cf644170f64d85fbaec71de1f6849489d19f6ea2",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32782"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32781"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32778"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32780"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32779"
}
]
},
"SandboxKey": "/var/run/docker/netns/2962495ad098",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-20210915172222-22677": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"aaeb7540ca07"
],
"NetworkID": "2267c202d9196530bd594d5d4380c3520a043a5e07321634ce532a8fbf8cb7f0",
"EndpointID": "dec20c1c8b7a50fb86fed92f0f74d2baa04603920518b3feb695b2e63f96f4e9",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
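Rather than reading the full docker inspect dump above, a single field can be pulled out with a Go template; the test harness itself does this further down in this log to find the SSH port ("22/tcp"). This is an illustrative sketch only, assuming docker is on PATH and reusing the container name and template from this log.

// port_sketch.go: illustrative only. Extracts the host port mapped to
// 22/tcp from `docker container inspect` using the same Go template
// that appears in the minikube provisioning log below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Container name copied from the log above; adjust for your own profile.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"functional-20210915172222-22677").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}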
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-20210915172222-22677 -n functional-20210915172222-22677
helpers_test.go:245: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p functional-20210915172222-22677 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915172222-22677 logs -n 25: (1.265099374s)
helpers_test.go:253: TestFunctional/serial/ComponentHealth logs:
-- stdout --
*
* ==> Audit <==
* |---------|--------------------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
| -p | nospam-20210915172138-22677 | nospam-20210915172138-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:22:03 UTC | Wed, 15 Sep 2021 17:22:03 UTC |
| | --log_dir | | | | | |
| | /tmp/nospam-20210915172138-22677 | | | | | |
| | unpause | | | | | |
| -p | nospam-20210915172138-22677 | nospam-20210915172138-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:22:03 UTC | Wed, 15 Sep 2021 17:22:04 UTC |
| | --log_dir | | | | | |
| | /tmp/nospam-20210915172138-22677 | | | | | |
| | unpause | | | | | |
| -p | nospam-20210915172138-22677 | nospam-20210915172138-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:22:04 UTC | Wed, 15 Sep 2021 17:22:04 UTC |
| | --log_dir | | | | | |
| | /tmp/nospam-20210915172138-22677 | | | | | |
| | unpause | | | | | |
| -p | nospam-20210915172138-22677 | nospam-20210915172138-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:22:04 UTC | Wed, 15 Sep 2021 17:22:20 UTC |
| | --log_dir | | | | | |
| | /tmp/nospam-20210915172138-22677 | | | | | |
| | stop | | | | | |
| -p | nospam-20210915172138-22677 | nospam-20210915172138-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:22:20 UTC | Wed, 15 Sep 2021 17:22:20 UTC |
| | --log_dir | | | | | |
| | /tmp/nospam-20210915172138-22677 | | | | | |
| | stop | | | | | |
| -p | nospam-20210915172138-22677 | nospam-20210915172138-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:22:20 UTC | Wed, 15 Sep 2021 17:22:20 UTC |
| | --log_dir | | | | | |
| | /tmp/nospam-20210915172138-22677 | | | | | |
| | stop | | | | | |
| delete | -p nospam-20210915172138-22677 | nospam-20210915172138-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:22:20 UTC | Wed, 15 Sep 2021 17:22:22 UTC |
| start | -p | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:22:22 UTC | Wed, 15 Sep 2021 17:23:04 UTC |
| | functional-20210915172222-22677 | | | | | |
| | --memory=4000 | | | | | |
| | --apiserver-port=8441 | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:04 UTC | Wed, 15 Sep 2021 17:23:09 UTC |
| | functional-20210915172222-22677 | | | | | |
| | --alsologtostderr -v=8 | | | | | |
| -p | functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:10 UTC | Wed, 15 Sep 2021 17:23:10 UTC |
| | cache add k8s.gcr.io/pause:3.1 | | | | | |
| -p | functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:10 UTC | Wed, 15 Sep 2021 17:23:12 UTC |
| | cache add k8s.gcr.io/pause:3.3 | | | | | |
| -p | functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:12 UTC | Wed, 15 Sep 2021 17:23:13 UTC |
| | cache add | | | | | |
| | k8s.gcr.io/pause:latest | | | | | |
| -p | functional-20210915172222-22677 cache add | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:13 UTC | Wed, 15 Sep 2021 17:23:14 UTC |
| | minikube-local-cache-test:functional-20210915172222-22677 | | | | | |
| -p | functional-20210915172222-22677 cache delete | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:14 UTC | Wed, 15 Sep 2021 17:23:14 UTC |
| | minikube-local-cache-test:functional-20210915172222-22677 | | | | | |
| cache | delete k8s.gcr.io/pause:3.3 | minikube | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:14 UTC | Wed, 15 Sep 2021 17:23:14 UTC |
| cache | list | minikube | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:15 UTC | Wed, 15 Sep 2021 17:23:15 UTC |
| -p | functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:15 UTC | Wed, 15 Sep 2021 17:23:15 UTC |
| | ssh sudo crictl images | | | | | |
| -p | functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:15 UTC | Wed, 15 Sep 2021 17:23:15 UTC |
| | ssh sudo docker rmi | | | | | |
| | k8s.gcr.io/pause:latest | | | | | |
| -p | functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:16 UTC | Wed, 15 Sep 2021 17:23:17 UTC |
| | cache reload | | | | | |
| -p | functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:17 UTC | Wed, 15 Sep 2021 17:23:17 UTC |
| | ssh sudo crictl inspecti | | | | | |
| | k8s.gcr.io/pause:latest | | | | | |
| cache | delete k8s.gcr.io/pause:3.1 | minikube | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:17 UTC | Wed, 15 Sep 2021 17:23:17 UTC |
| cache | delete k8s.gcr.io/pause:latest | minikube | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:17 UTC | Wed, 15 Sep 2021 17:23:17 UTC |
| -p | functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:17 UTC | Wed, 15 Sep 2021 17:23:17 UTC |
| | kubectl -- --context | | | | | |
| | functional-20210915172222-22677 | | | | | |
| | get pods | | | | | |
| kubectl | --profile=functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:17 UTC | Wed, 15 Sep 2021 17:23:17 UTC |
| | -- --context | | | | | |
| | functional-20210915172222-22677 get pods | | | | | |
| start | -p functional-20210915172222-22677 | functional-20210915172222-22677 | jenkins | v1.23.0 | Wed, 15 Sep 2021 17:23:17 UTC | Wed, 15 Sep 2021 17:23:38 UTC |
| | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision | | | | | |
| | --wait=all | | | | | |
|---------|--------------------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/09/15 17:23:17
Running on machine: debian-jenkins-agent-1
Binary: Built with gc go1.17 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0915 17:23:17.784680 55240 out.go:298] Setting OutFile to fd 1 ...
I0915 17:23:17.784748 55240 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 17:23:17.784750 55240 out.go:311] Setting ErrFile to fd 2...
I0915 17:23:17.784755 55240 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 17:23:17.784851 55240 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/bin
I0915 17:23:17.785040 55240 out.go:305] Setting JSON to false
I0915 17:23:17.821843 55240 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":11155,"bootTime":1631715442,"procs":213,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0915 17:23:17.821915 55240 start.go:121] virtualization: kvm guest
I0915 17:23:17.824920 55240 out.go:177] * [functional-20210915172222-22677] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
I0915 17:23:17.826427 55240 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/kubeconfig
I0915 17:23:17.825075 55240 notify.go:169] Checking for updates...
I0915 17:23:17.827863 55240 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0915 17:23:17.829337 55240 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube
I0915 17:23:17.831765 55240 out.go:177] - MINIKUBE_LOCATION=12425
I0915 17:23:17.832545 55240 config.go:177] Loaded profile config "functional-20210915172222-22677": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0915 17:23:17.832586 55240 driver.go:343] Setting default libvirt URI to qemu:///system
I0915 17:23:17.879928 55240 docker.go:132] docker version: linux-19.03.15
I0915 17:23:17.880007 55240 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0915 17:23:17.961087 55240 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:181 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-09-15 17:23:17.914912705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0915 17:23:17.961198 55240 docker.go:237] overlay module found
I0915 17:23:17.963342 55240 out.go:177] * Using the docker driver based on existing profile
I0915 17:23:17.963373 55240 start.go:278] selected driver: docker
I0915 17:23:17.963379 55240 start.go:751] validating driver "docker" against &{Name:functional-20210915172222-22677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915172222-22677 Namespace:default APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAdd
onImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 17:23:17.963469 55240 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0915 17:23:17.963908 55240 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0915 17:23:18.046675 55240 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:181 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-09-15 17:23:18.00085714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warning
s:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0915 17:23:18.047239 55240 start_flags.go:737] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0915 17:23:18.047257 55240 cni.go:93] Creating CNI manager for ""
I0915 17:23:18.047263 55240 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0915 17:23:18.047268 55240 start_flags.go:278] config:
{Name:functional-20210915172222-22677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915172222-22677 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddo
nImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 17:23:18.049622 55240 out.go:177] * Starting control plane node functional-20210915172222-22677 in cluster functional-20210915172222-22677
I0915 17:23:18.049647 55240 cache.go:118] Beginning downloading kic base image for docker with docker
I0915 17:23:18.051079 55240 out.go:177] * Pulling base image ...
I0915 17:23:18.051104 55240 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
I0915 17:23:18.051132 55240 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
I0915 17:23:18.051141 55240 cache.go:57] Caching tarball of preloaded images
I0915 17:23:18.051198 55240 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
I0915 17:23:18.051313 55240 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0915 17:23:18.051341 55240 cache.go:60] Finished verifying existence of preloaded tar for v1.22.1 on docker
I0915 17:23:18.051479 55240 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/profiles/functional-20210915172222-22677/config.json ...
I0915 17:23:18.158899 55240 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
I0915 17:23:18.158914 55240 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
I0915 17:23:18.158931 55240 cache.go:206] Successfully downloaded all kic artifacts
I0915 17:23:18.158960 55240 start.go:313] acquiring machines lock for functional-20210915172222-22677: {Name:mk42b53e1c6e7d6f1e375fd98da7378d94ddc715 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 17:23:18.159048 55240 start.go:317] acquired machines lock for "functional-20210915172222-22677" in 71.745µs
I0915 17:23:18.159062 55240 start.go:93] Skipping create...Using existing machine configuration
I0915 17:23:18.159066 55240 fix.go:55] fixHost starting:
I0915 17:23:18.159303 55240 cli_runner.go:115] Run: docker container inspect functional-20210915172222-22677 --format={{.State.Status}}
I0915 17:23:18.198143 55240 fix.go:108] recreateIfNeeded on functional-20210915172222-22677: state=Running err=<nil>
W0915 17:23:18.198162 55240 fix.go:134] unexpected machine state, will restart: <nil>
I0915 17:23:18.200378 55240 out.go:177] * Updating the running docker "functional-20210915172222-22677" container ...
I0915 17:23:18.200410 55240 machine.go:88] provisioning docker machine ...
I0915 17:23:18.200441 55240 ubuntu.go:169] provisioning hostname "functional-20210915172222-22677"
I0915 17:23:18.200488 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:18.239132 55240 main.go:130] libmachine: Using SSH client type: native
I0915 17:23:18.239360 55240 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32782 <nil> <nil>}
I0915 17:23:18.239374 55240 main.go:130] libmachine: About to run SSH command:
sudo hostname functional-20210915172222-22677 && echo "functional-20210915172222-22677" | sudo tee /etc/hostname
I0915 17:23:18.355264 55240 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20210915172222-22677
I0915 17:23:18.355352 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:18.395462 55240 main.go:130] libmachine: Using SSH client type: native
I0915 17:23:18.395634 55240 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32782 <nil> <nil>}
I0915 17:23:18.395657 55240 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-20210915172222-22677' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20210915172222-22677/g' /etc/hosts;
else
echo '127.0.1.1 functional-20210915172222-22677' | sudo tee -a /etc/hosts;
fi
fi
I0915 17:23:18.506937 55240 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0915 17:23:18.506971 55240 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/key.pem Serve
rCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube}
I0915 17:23:18.507001 55240 ubuntu.go:177] setting up certificates
I0915 17:23:18.507019 55240 provision.go:83] configureAuth start
I0915 17:23:18.507083 55240 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210915172222-22677
I0915 17:23:18.545724 55240 provision.go:138] copyHostCerts
I0915 17:23:18.545816 55240 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/key.pem, removing ...
I0915 17:23:18.545823 55240 exec_runner.go:208] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/key.pem
I0915 17:23:18.545870 55240 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/key.pem (1675 bytes)
I0915 17:23:18.545956 55240 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/ca.pem, removing ...
I0915 17:23:18.545964 55240 exec_runner.go:208] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/ca.pem
I0915 17:23:18.545982 55240 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/ca.pem (1078 bytes)
I0915 17:23:18.546036 55240 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/cert.pem, removing ...
I0915 17:23:18.546039 55240 exec_runner.go:208] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/cert.pem
I0915 17:23:18.546064 55240 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/cert.pem (1123 bytes)
I0915 17:23:18.546140 55240 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/ca-key.pem org=jenkins.functional-20210915172222-22677 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20210915172222-22677]
I0915 17:23:18.659089 55240 provision.go:172] copyRemoteCerts
I0915 17:23:18.659142 55240 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0915 17:23:18.659180 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:18.697976 55240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/functional-20210915172222-22677/id_rsa Username:docker}
I0915 17:23:18.778706 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0915 17:23:18.795571 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
I0915 17:23:18.811243 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0915 17:23:18.826703 55240 provision.go:86] duration metric: configureAuth took 319.676042ms
I0915 17:23:18.826716 55240 ubuntu.go:193] setting minikube options for container-runtime
I0915 17:23:18.826880 55240 config.go:177] Loaded profile config "functional-20210915172222-22677": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0915 17:23:18.826921 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:18.866950 55240 main.go:130] libmachine: Using SSH client type: native
I0915 17:23:18.867095 55240 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32782 <nil> <nil>}
I0915 17:23:18.867101 55240 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0915 17:23:18.971162 55240 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
I0915 17:23:18.971179 55240 ubuntu.go:71] root file system type: overlay
I0915 17:23:18.971404 55240 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0915 17:23:18.971463 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:19.011545 55240 main.go:130] libmachine: Using SSH client type: native
I0915 17:23:19.011676 55240 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32782 <nil> <nil>}
I0915 17:23:19.011729 55240 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0915 17:23:19.128586 55240 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0915 17:23:19.128650 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:19.169638 55240 main.go:130] libmachine: Using SSH client type: native
I0915 17:23:19.169806 55240 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32782 <nil> <nil>}
I0915 17:23:19.169817 55240 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0915 17:23:19.278864 55240 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0915 17:23:19.278885 55240 machine.go:91] provisioned docker machine in 1.078468896s
I0915 17:23:19.278894 55240 start.go:267] post-start starting for "functional-20210915172222-22677" (driver="docker")
I0915 17:23:19.278899 55240 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0915 17:23:19.278945 55240 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0915 17:23:19.278976 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:19.318875 55240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/functional-20210915172222-22677/id_rsa Username:docker}
I0915 17:23:19.398278 55240 ssh_runner.go:152] Run: cat /etc/os-release
I0915 17:23:19.401012 55240 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0915 17:23:19.401030 55240 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0915 17:23:19.401041 55240 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0915 17:23:19.401050 55240 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0915 17:23:19.401065 55240 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/addons for local assets ...
I0915 17:23:19.401111 55240 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/files for local assets ...
I0915 17:23:19.401177 55240 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/files/etc/ssl/certs/226772.pem -> 226772.pem in /etc/ssl/certs
I0915 17:23:19.401240 55240 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/files/etc/test/nested/copy/22677/hosts -> hosts in /etc/test/nested/copy/22677
I0915 17:23:19.401266 55240 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/22677
I0915 17:23:19.407477 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/files/etc/ssl/certs/226772.pem --> /etc/ssl/certs/226772.pem (1708 bytes)
I0915 17:23:19.423521 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/files/etc/test/nested/copy/22677/hosts --> /etc/test/nested/copy/22677/hosts (40 bytes)
I0915 17:23:19.439425 55240 start.go:270] post-start completed in 160.517985ms
I0915 17:23:19.439480 55240 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0915 17:23:19.439523 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:19.479301 55240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/functional-20210915172222-22677/id_rsa Username:docker}
I0915 17:23:19.556370 55240 fix.go:57] fixHost completed within 1.397296017s
I0915 17:23:19.556385 55240 start.go:80] releasing machines lock for "functional-20210915172222-22677", held for 1.397331919s
I0915 17:23:19.556459 55240 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210915172222-22677
I0915 17:23:19.594517 55240 ssh_runner.go:152] Run: systemctl --version
I0915 17:23:19.594552 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:19.594584 55240 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I0915 17:23:19.594632 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:19.635574 55240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/functional-20210915172222-22677/id_rsa Username:docker}
I0915 17:23:19.643957 55240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/functional-20210915172222-22677/id_rsa Username:docker}
I0915 17:23:19.711529 55240 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I0915 17:23:19.734272 55240 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0915 17:23:19.743372 55240 cruntime.go:255] skipping containerd shutdown because we are bound to it
I0915 17:23:19.743416 55240 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I0915 17:23:19.752086 55240 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0915 17:23:19.764036 55240 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
I0915 17:23:19.841949 55240 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
I0915 17:23:19.919227 55240 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0915 17:23:19.928459 55240 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I0915 17:23:20.006904 55240 ssh_runner.go:152] Run: sudo systemctl start docker
I0915 17:23:20.015902 55240 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0915 17:23:20.055219 55240 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0915 17:23:20.096073 55240 out.go:204] * Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
I0915 17:23:20.096144 55240 cli_runner.go:115] Run: docker network inspect functional-20210915172222-22677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 17:23:20.136031 55240 ssh_runner.go:152] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0915 17:23:20.141570 55240 out.go:177] - apiserver.enable-admission-plugins=NamespaceAutoProvision
I0915 17:23:20.141671 55240 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
I0915 17:23:20.141748 55240 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0915 17:23:20.173785 55240 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5
kubernetesui/dashboard:v2.1.0
minikube-local-cache-test:functional-20210915172222-22677
k8s.gcr.io/pause:3.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/pause:latest
-- /stdout --
I0915 17:23:20.173798 55240 docker.go:489] Images already preloaded, skipping extraction
I0915 17:23:20.173834 55240 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0915 17:23:20.203981 55240 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5
kubernetesui/dashboard:v2.1.0
minikube-local-cache-test:functional-20210915172222-22677
k8s.gcr.io/pause:3.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/pause:latest
-- /stdout --
I0915 17:23:20.204000 55240 cache_images.go:78] Images are preloaded, skipping loading
I0915 17:23:20.204048 55240 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
I0915 17:23:20.283918 55240 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
I0915 17:23:20.283949 55240 cni.go:93] Creating CNI manager for ""
I0915 17:23:20.283957 55240 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0915 17:23:20.283964 55240 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0915 17:23:20.283975 55240 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20210915172222-22677 NodeName:functional-20210915172222-22677 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0915 17:23:20.284090 55240 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "functional-20210915172222-22677"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceAutoProvision"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0915 17:23:20.284172 55240 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20210915172222-22677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.22.1 ClusterName:functional-20210915172222-22677 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
I0915 17:23:20.284220 55240 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
I0915 17:23:20.291167 55240 binaries.go:44] Found k8s binaries, skipping transfer
I0915 17:23:20.291223 55240 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0915 17:23:20.297564 55240 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
I0915 17:23:20.309246 55240 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0915 17:23:20.320872 55240 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1924 bytes)
I0915 17:23:20.332377 55240 ssh_runner.go:152] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0915 17:23:20.335192 55240 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/profiles/functional-20210915172222-22677 for IP: 192.168.49.2
I0915 17:23:20.335223 55240 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/ca.key
I0915 17:23:20.335234 55240 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/proxy-client-ca.key
I0915 17:23:20.335277 55240 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/profiles/functional-20210915172222-22677/client.key
I0915 17:23:20.335294 55240 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/profiles/functional-20210915172222-22677/apiserver.key.dd3b5fb2
I0915 17:23:20.335307 55240 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/profiles/functional-20210915172222-22677/proxy-client.key
I0915 17:23:20.335418 55240 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/22677.pem (1338 bytes)
W0915 17:23:20.335449 55240 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/22677_empty.pem, impossibly tiny 0 bytes
I0915 17:23:20.335462 55240 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/ca-key.pem (1679 bytes)
I0915 17:23:20.335480 55240 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/ca.pem (1078 bytes)
I0915 17:23:20.335498 55240 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/cert.pem (1123 bytes)
I0915 17:23:20.335514 55240 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/key.pem (1675 bytes)
I0915 17:23:20.335545 55240 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/files/etc/ssl/certs/226772.pem (1708 bytes)
I0915 17:23:20.336508 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/profiles/functional-20210915172222-22677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0915 17:23:20.352402 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/profiles/functional-20210915172222-22677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0915 17:23:20.368235 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/profiles/functional-20210915172222-22677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0915 17:23:20.383852 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/profiles/functional-20210915172222-22677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0915 17:23:20.399116 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0915 17:23:20.414475 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0915 17:23:20.431250 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0915 17:23:20.450193 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0915 17:23:20.466201 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0915 17:23:20.481983 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/certs/22677.pem --> /usr/share/ca-certificates/22677.pem (1338 bytes)
I0915 17:23:20.497433 55240 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/files/etc/ssl/certs/226772.pem --> /usr/share/ca-certificates/226772.pem (1708 bytes)
I0915 17:23:20.513096 55240 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0915 17:23:20.524600 55240 ssh_runner.go:152] Run: openssl version
I0915 17:23:20.529098 55240 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0915 17:23:20.535887 55240 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0915 17:23:20.538643 55240 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 17:18 /usr/share/ca-certificates/minikubeCA.pem
I0915 17:23:20.538668 55240 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0915 17:23:20.543084 55240 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0915 17:23:20.549132 55240 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22677.pem && ln -fs /usr/share/ca-certificates/22677.pem /etc/ssl/certs/22677.pem"
I0915 17:23:20.555856 55240 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/22677.pem
I0915 17:23:20.558704 55240 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Sep 15 17:22 /usr/share/ca-certificates/22677.pem
I0915 17:23:20.558731 55240 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22677.pem
I0915 17:23:20.563198 55240 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22677.pem /etc/ssl/certs/51391683.0"
I0915 17:23:20.569275 55240 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/226772.pem && ln -fs /usr/share/ca-certificates/226772.pem /etc/ssl/certs/226772.pem"
I0915 17:23:20.575908 55240 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/226772.pem
I0915 17:23:20.578683 55240 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Sep 15 17:22 /usr/share/ca-certificates/226772.pem
I0915 17:23:20.578717 55240 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/226772.pem
I0915 17:23:20.583007 55240 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/226772.pem /etc/ssl/certs/3ec20f2e.0"
I0915 17:23:20.589065 55240 kubeadm.go:390] StartCluster: {Name:functional-20210915172222-22677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915172222-22677 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 17:23:20.589193 55240 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0915 17:23:20.620025 55240 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0915 17:23:20.627177 55240 kubeadm.go:401] found existing configuration files, will attempt cluster restart
I0915 17:23:20.627190 55240 kubeadm.go:600] restartCluster start
I0915 17:23:20.627233 55240 ssh_runner.go:152] Run: sudo test -d /data/minikube
I0915 17:23:20.633417 55240 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0915 17:23:20.634088 55240 kubeconfig.go:93] found "functional-20210915172222-22677" server: "https://192.168.49.2:8441"
I0915 17:23:20.635837 55240 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0915 17:23:20.642073 55240 kubeadm.go:568] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2021-09-15 17:22:30.902268938 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2021-09-15 17:23:20.329577728 +0000
@@ -22,7 +22,7 @@
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
+ enable-admission-plugins: "NamespaceAutoProvision"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
-- /stdout --
I0915 17:23:20.642090 55240 kubeadm.go:1032] stopping kube-system containers ...
I0915 17:23:20.642127 55240 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0915 17:23:20.673463 55240 docker.go:390] Stopping containers: [4d217398099b f77561e61ffb 4cf87740c525 7b477389f1f0 697f6239cc8e 893b77e15033 9415956d2336 75c83d84914d 93bf0bae25d8 84170625d5e4 cf5e0878010f 06716f7d33c9 3d58c9a8080d 28f7f7a008ba]
I0915 17:23:20.673515 55240 ssh_runner.go:152] Run: docker stop 4d217398099b f77561e61ffb 4cf87740c525 7b477389f1f0 697f6239cc8e 893b77e15033 9415956d2336 75c83d84914d 93bf0bae25d8 84170625d5e4 cf5e0878010f 06716f7d33c9 3d58c9a8080d 28f7f7a008ba
I0915 17:23:25.902887 55240 ssh_runner.go:192] Completed: docker stop 4d217398099b f77561e61ffb 4cf87740c525 7b477389f1f0 697f6239cc8e 893b77e15033 9415956d2336 75c83d84914d 93bf0bae25d8 84170625d5e4 cf5e0878010f 06716f7d33c9 3d58c9a8080d 28f7f7a008ba: (5.22934882s)
I0915 17:23:25.902946 55240 ssh_runner.go:152] Run: sudo systemctl stop kubelet
I0915 17:23:25.941666 55240 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0915 17:23:25.949123 55240 kubeadm.go:154] found existing configuration files:
-rw------- 1 root root 5643 Sep 15 17:22 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Sep 15 17:22 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2063 Sep 15 17:22 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Sep 15 17:22 /etc/kubernetes/scheduler.conf
I0915 17:23:25.949177 55240 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I0915 17:23:25.955756 55240 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I0915 17:23:25.962776 55240 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I0915 17:23:25.969333 55240 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0915 17:23:25.969373 55240 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0915 17:23:25.975588 55240 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I0915 17:23:25.981899 55240 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0915 17:23:25.981929 55240 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0915 17:23:25.987780 55240 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0915 17:23:25.994122 55240 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0915 17:23:25.994132 55240 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0915 17:23:26.037641 55240 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0915 17:23:26.798012 55240 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0915 17:23:26.949582 55240 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0915 17:23:27.000625 55240 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0915 17:23:27.058657 55240 api_server.go:50] waiting for apiserver process to appear ...
I0915 17:23:27.058712 55240 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 17:23:27.078325 55240 api_server.go:70] duration metric: took 19.667322ms to wait for apiserver process to appear ...
I0915 17:23:27.078343 55240 api_server.go:86] waiting for apiserver healthz status ...
I0915 17:23:27.078354 55240 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0915 17:23:27.083448 55240 api_server.go:265] https://192.168.49.2:8441/healthz returned 200:
ok
I0915 17:23:27.090589 55240 api_server.go:139] control plane version: v1.22.1
I0915 17:23:27.090602 55240 api_server.go:129] duration metric: took 12.25495ms to wait for apiserver health ...
I0915 17:23:27.090610 55240 cni.go:93] Creating CNI manager for ""
I0915 17:23:27.090615 55240 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0915 17:23:27.090620 55240 system_pods.go:43] waiting for kube-system pods to appear ...
I0915 17:23:27.101963 55240 system_pods.go:59] 7 kube-system pods found
I0915 17:23:27.101982 55240 system_pods.go:61] "coredns-78fcd69978-4hsx6" [2a2c8f23-e485-406d-a27b-ab1190b13a07] Running
I0915 17:23:27.101987 55240 system_pods.go:61] "etcd-functional-20210915172222-22677" [f3b8b47d-ae44-4fee-887b-572b9ffbdd40] Running
I0915 17:23:27.101994 55240 system_pods.go:61] "kube-apiserver-functional-20210915172222-22677" [6bd75bd6-106f-46ae-a043-1f641eb07c39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0915 17:23:27.101997 55240 system_pods.go:61] "kube-controller-manager-functional-20210915172222-22677" [894edb22-9af5-43fa-bf97-3f92fa641b82] Running
I0915 17:23:27.102002 55240 system_pods.go:61] "kube-proxy-rb5vd" [e2565cb2-d7c1-4b6b-add4-2369c73bd486] Running
I0915 17:23:27.102005 55240 system_pods.go:61] "kube-scheduler-functional-20210915172222-22677" [c52c2543-2254-4d9b-9db3-5622731d9f08] Running
I0915 17:23:27.102009 55240 system_pods.go:61] "storage-provisioner" [e0cdcfac-77cf-4ab3-b433-0dcf5a820199] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 17:23:27.102015 55240 system_pods.go:74] duration metric: took 11.391515ms to wait for pod list to return data ...
I0915 17:23:27.102020 55240 node_conditions.go:102] verifying NodePressure condition ...
I0915 17:23:27.128489 55240 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
I0915 17:23:27.128515 55240 node_conditions.go:123] node cpu capacity is 8
I0915 17:23:27.128524 55240 node_conditions.go:105] duration metric: took 26.500841ms to run NodePressure ...
I0915 17:23:27.128538 55240 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0915 17:23:27.546489 55240 kubeadm.go:731] waiting for restarted kubelet to initialise ...
I0915 17:23:27.550668 55240 kubeadm.go:746] kubelet initialised
I0915 17:23:27.550677 55240 kubeadm.go:747] duration metric: took 4.172899ms waiting for restarted kubelet to initialise ...
I0915 17:23:27.550684 55240 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 17:23:27.554982 55240 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-4hsx6" in "kube-system" namespace to be "Ready" ...
I0915 17:23:27.566041 55240 pod_ready.go:97] node "functional-20210915172222-22677" hosting pod "coredns-78fcd69978-4hsx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20210915172222-22677" has status "Ready":"False"
I0915 17:23:27.566053 55240 pod_ready.go:81] duration metric: took 11.055229ms waiting for pod "coredns-78fcd69978-4hsx6" in "kube-system" namespace to be "Ready" ...
E0915 17:23:27.566064 55240 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20210915172222-22677" hosting pod "coredns-78fcd69978-4hsx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20210915172222-22677" has status "Ready":"False"
I0915 17:23:27.566089 55240 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
I0915 17:23:27.634182 55240 pod_ready.go:97] node "functional-20210915172222-22677" hosting pod "etcd-functional-20210915172222-22677" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20210915172222-22677" has status "Ready":"False"
I0915 17:23:27.634196 55240 pod_ready.go:81] duration metric: took 68.100282ms waiting for pod "etcd-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
E0915 17:23:27.634206 55240 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20210915172222-22677" hosting pod "etcd-functional-20210915172222-22677" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20210915172222-22677" has status "Ready":"False"
I0915 17:23:27.634230 55240 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
I0915 17:23:27.637877 55240 pod_ready.go:97] node "functional-20210915172222-22677" hosting pod "kube-apiserver-functional-20210915172222-22677" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20210915172222-22677" has status "Ready":"False"
I0915 17:23:27.637891 55240 pod_ready.go:81] duration metric: took 3.651263ms waiting for pod "kube-apiserver-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
E0915 17:23:27.637900 55240 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20210915172222-22677" hosting pod "kube-apiserver-functional-20210915172222-22677" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20210915172222-22677" has status "Ready":"False"
I0915 17:23:27.637923 55240 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
I0915 17:23:27.644350 55240 pod_ready.go:97] node "functional-20210915172222-22677" hosting pod "kube-controller-manager-functional-20210915172222-22677" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20210915172222-22677" has status "Ready":"False"
I0915 17:23:27.644369 55240 pod_ready.go:81] duration metric: took 6.437732ms waiting for pod "kube-controller-manager-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
E0915 17:23:27.644378 55240 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20210915172222-22677" hosting pod "kube-controller-manager-functional-20210915172222-22677" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20210915172222-22677" has status "Ready":"False"
I0915 17:23:27.644488 55240 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rb5vd" in "kube-system" namespace to be "Ready" ...
I0915 17:23:27.949467 55240 pod_ready.go:92] pod "kube-proxy-rb5vd" in "kube-system" namespace has status "Ready":"True"
I0915 17:23:27.949477 55240 pod_ready.go:81] duration metric: took 304.972608ms waiting for pod "kube-proxy-rb5vd" in "kube-system" namespace to be "Ready" ...
I0915 17:23:27.949487 55240 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
I0915 17:23:28.349622 55240 pod_ready.go:92] pod "kube-scheduler-functional-20210915172222-22677" in "kube-system" namespace has status "Ready":"True"
I0915 17:23:28.349632 55240 pod_ready.go:81] duration metric: took 400.133536ms waiting for pod "kube-scheduler-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
I0915 17:23:28.349641 55240 pod_ready.go:38] duration metric: took 798.94825ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 17:23:28.349657 55240 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0915 17:23:28.365930 55240 ops.go:34] apiserver oom_adj: -16
I0915 17:23:28.365943 55240 kubeadm.go:604] restartCluster took 7.738749293s
I0915 17:23:28.365950 55240 kubeadm.go:392] StartCluster complete in 7.776893396s
I0915 17:23:28.365965 55240 settings.go:142] acquiring lock: {Name:mk6a99e84c96a9413ba1dc41c8ebcd16ed986c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 17:23:28.366056 55240 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/kubeconfig
I0915 17:23:28.366602 55240 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/kubeconfig: {Name:mk412c107d54eb842ff63733d51ecb9911a064a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 17:23:28.371180 55240 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20210915172222-22677" rescaled to 1
I0915 17:23:28.371343 55240 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0915 17:23:28.371306 55240 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
I0915 17:23:28.373605 55240 out.go:177] * Verifying Kubernetes components...
I0915 17:23:28.371436 55240 addons.go:404] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
I0915 17:23:28.373664 55240 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0915 17:23:28.373704 55240 addons.go:65] Setting storage-provisioner=true in profile "functional-20210915172222-22677"
I0915 17:23:28.371565 55240 config.go:177] Loaded profile config "functional-20210915172222-22677": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0915 17:23:28.373720 55240 addons.go:153] Setting addon storage-provisioner=true in "functional-20210915172222-22677"
W0915 17:23:28.373726 55240 addons.go:165] addon storage-provisioner should already be in state true
I0915 17:23:28.373734 55240 addons.go:65] Setting default-storageclass=true in profile "functional-20210915172222-22677"
I0915 17:23:28.373749 55240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20210915172222-22677"
I0915 17:23:28.373757 55240 host.go:66] Checking if "functional-20210915172222-22677" exists ...
I0915 17:23:28.374071 55240 cli_runner.go:115] Run: docker container inspect functional-20210915172222-22677 --format={{.State.Status}}
I0915 17:23:28.374243 55240 cli_runner.go:115] Run: docker container inspect functional-20210915172222-22677 --format={{.State.Status}}
I0915 17:23:28.424462 55240 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0915 17:23:28.424601 55240 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0915 17:23:28.424609 55240 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0915 17:23:28.424710 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:28.431810 55240 addons.go:153] Setting addon default-storageclass=true in "functional-20210915172222-22677"
W0915 17:23:28.431820 55240 addons.go:165] addon default-storageclass should already be in state true
I0915 17:23:28.431841 55240 host.go:66] Checking if "functional-20210915172222-22677" exists ...
I0915 17:23:28.432157 55240 cli_runner.go:115] Run: docker container inspect functional-20210915172222-22677 --format={{.State.Status}}
I0915 17:23:28.441627 55240 node_ready.go:35] waiting up to 6m0s for node "functional-20210915172222-22677" to be "Ready" ...
I0915 17:23:28.441700 55240 start.go:709] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0915 17:23:28.469507 55240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/functional-20210915172222-22677/id_rsa Username:docker}
I0915 17:23:28.474832 55240 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
I0915 17:23:28.474847 55240 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0915 17:23:28.474910 55240 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915172222-22677
I0915 17:23:28.518001 55240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-19266-b9d7ac983dd68de861f6c962981dfd25d0b1477c/.minikube/machines/functional-20210915172222-22677/id_rsa Username:docker}
I0915 17:23:28.550306 55240 node_ready.go:49] node "functional-20210915172222-22677" has status "Ready":"True"
I0915 17:23:28.550314 55240 node_ready.go:38] duration metric: took 108.666503ms waiting for node "functional-20210915172222-22677" to be "Ready" ...
I0915 17:23:28.550321 55240 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 17:23:28.560212 55240 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0915 17:23:28.610855 55240 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0915 17:23:28.752098 55240 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-4hsx6" in "kube-system" namespace to be "Ready" ...
I0915 17:23:28.853963 55240 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0915 17:23:28.853987 55240 addons.go:406] enableAddons completed in 482.56587ms
I0915 17:23:29.152155 55240 pod_ready.go:92] pod "coredns-78fcd69978-4hsx6" in "kube-system" namespace has status "Ready":"True"
I0915 17:23:29.152164 55240 pod_ready.go:81] duration metric: took 400.048506ms waiting for pod "coredns-78fcd69978-4hsx6" in "kube-system" namespace to be "Ready" ...
I0915 17:23:29.152187 55240 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
I0915 17:23:29.547739 55240 pod_ready.go:97] node "functional-20210915172222-22677" hosting pod "etcd-functional-20210915172222-22677" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-20210915172222-22677": Get "https://192.168.49.2:8441/api/v1/nodes/functional-20210915172222-22677": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:29.547755 55240 pod_ready.go:81] duration metric: took 395.561511ms waiting for pod "etcd-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
E0915 17:23:29.547765 55240 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20210915172222-22677" hosting pod "etcd-functional-20210915172222-22677" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-20210915172222-22677": Get "https://192.168.49.2:8441/api/v1/nodes/functional-20210915172222-22677": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:29.547790 55240 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
I0915 17:23:29.747650 55240 pod_ready.go:97] error getting pod "kube-apiserver-functional-20210915172222-22677" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20210915172222-22677": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:29.747664 55240 pod_ready.go:81] duration metric: took 199.867956ms waiting for pod "kube-apiserver-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
E0915 17:23:29.747674 55240 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-20210915172222-22677" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20210915172222-22677": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:29.747694 55240 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
I0915 17:23:29.947593 55240 pod_ready.go:97] error getting pod "kube-controller-manager-functional-20210915172222-22677" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20210915172222-22677": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:29.947606 55240 pod_ready.go:81] duration metric: took 199.90612ms waiting for pod "kube-controller-manager-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
E0915 17:23:29.947616 55240 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-20210915172222-22677" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20210915172222-22677": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:29.947637 55240 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb5vd" in "kube-system" namespace to be "Ready" ...
I0915 17:23:30.147449 55240 pod_ready.go:97] error getting pod "kube-proxy-rb5vd" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rb5vd": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:30.147464 55240 pod_ready.go:81] duration metric: took 199.820468ms waiting for pod "kube-proxy-rb5vd" in "kube-system" namespace to be "Ready" ...
E0915 17:23:30.147473 55240 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-rb5vd" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rb5vd": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:30.147496 55240 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
I0915 17:23:30.347180 55240 pod_ready.go:97] error getting pod "kube-scheduler-functional-20210915172222-22677" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20210915172222-22677": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:30.347195 55240 pod_ready.go:81] duration metric: took 199.692735ms waiting for pod "kube-scheduler-functional-20210915172222-22677" in "kube-system" namespace to be "Ready" ...
E0915 17:23:30.347210 55240 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-20210915172222-22677" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20210915172222-22677": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:30.347233 55240 pod_ready.go:38] duration metric: took 1.796905059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 17:23:30.347248 55240 api_server.go:50] waiting for apiserver process to appear ...
I0915 17:23:30.347284 55240 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 17:23:30.866366 55240 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 17:23:31.366640 55240 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 17:23:31.387917 55240 api_server.go:70] duration metric: took 3.016547449s to wait for apiserver process to appear ...
I0915 17:23:31.387931 55240 api_server.go:86] waiting for apiserver healthz status ...
I0915 17:23:31.387939 55240 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0915 17:23:33.654717 55240 api_server.go:265] https://192.168.49.2:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0915 17:23:33.654736 55240 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0915 17:23:34.155426 55240 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0915 17:23:34.159708 55240 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0915 17:23:34.159740 55240 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0915 17:23:34.654940 55240 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0915 17:23:34.659342 55240 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0915 17:23:34.659357 55240 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0915 17:23:35.154864 55240 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0915 17:23:35.159411 55240 api_server.go:265] https://192.168.49.2:8441/healthz returned 200:
ok
I0915 17:23:35.165530 55240 api_server.go:139] control plane version: v1.22.1
I0915 17:23:35.165540 55240 api_server.go:129] duration metric: took 3.777604871s to wait for apiserver health ...
I0915 17:23:35.165546 55240 system_pods.go:43] waiting for kube-system pods to appear ...
I0915 17:23:35.171170 55240 system_pods.go:59] 7 kube-system pods found
I0915 17:23:35.171188 55240 system_pods.go:61] "coredns-78fcd69978-4hsx6" [2a2c8f23-e485-406d-a27b-ab1190b13a07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 17:23:35.171197 55240 system_pods.go:61] "etcd-functional-20210915172222-22677" [f3b8b47d-ae44-4fee-887b-572b9ffbdd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0915 17:23:35.171203 55240 system_pods.go:61] "kube-apiserver-functional-20210915172222-22677" [e0bc9b12-e146-452b-87d1-743baa85d5e9] Pending
I0915 17:23:35.171209 55240 system_pods.go:61] "kube-controller-manager-functional-20210915172222-22677" [894edb22-9af5-43fa-bf97-3f92fa641b82] Running
I0915 17:23:35.171213 55240 system_pods.go:61] "kube-proxy-rb5vd" [e2565cb2-d7c1-4b6b-add4-2369c73bd486] Running
I0915 17:23:35.171218 55240 system_pods.go:61] "kube-scheduler-functional-20210915172222-22677" [c52c2543-2254-4d9b-9db3-5622731d9f08] Running
I0915 17:23:35.171224 55240 system_pods.go:61] "storage-provisioner" [e0cdcfac-77cf-4ab3-b433-0dcf5a820199] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 17:23:35.171229 55240 system_pods.go:74] duration metric: took 5.679404ms to wait for pod list to return data ...
I0915 17:23:35.171236 55240 default_sa.go:34] waiting for default service account to be created ...
I0915 17:23:35.173581 55240 default_sa.go:45] found service account: "default"
I0915 17:23:35.173586 55240 default_sa.go:55] duration metric: took 2.347336ms for default service account to be created ...
I0915 17:23:35.173591 55240 system_pods.go:116] waiting for k8s-apps to be running ...
I0915 17:23:35.177620 55240 system_pods.go:86] 7 kube-system pods found
I0915 17:23:35.177635 55240 system_pods.go:89] "coredns-78fcd69978-4hsx6" [2a2c8f23-e485-406d-a27b-ab1190b13a07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 17:23:35.177649 55240 system_pods.go:89] "etcd-functional-20210915172222-22677" [f3b8b47d-ae44-4fee-887b-572b9ffbdd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0915 17:23:35.177656 55240 system_pods.go:89] "kube-apiserver-functional-20210915172222-22677" [e0bc9b12-e146-452b-87d1-743baa85d5e9] Pending
I0915 17:23:35.177662 55240 system_pods.go:89] "kube-controller-manager-functional-20210915172222-22677" [894edb22-9af5-43fa-bf97-3f92fa641b82] Running
I0915 17:23:35.177668 55240 system_pods.go:89] "kube-proxy-rb5vd" [e2565cb2-d7c1-4b6b-add4-2369c73bd486] Running
I0915 17:23:35.177673 55240 system_pods.go:89] "kube-scheduler-functional-20210915172222-22677" [c52c2543-2254-4d9b-9db3-5622731d9f08] Running
I0915 17:23:35.177680 55240 system_pods.go:89] "storage-provisioner" [e0cdcfac-77cf-4ab3-b433-0dcf5a820199] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 17:23:35.177701 55240 retry.go:31] will retry after 263.082536ms: missing components: kube-apiserver
I0915 17:23:35.445330 55240 system_pods.go:86] 7 kube-system pods found
I0915 17:23:35.445349 55240 system_pods.go:89] "coredns-78fcd69978-4hsx6" [2a2c8f23-e485-406d-a27b-ab1190b13a07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 17:23:35.445355 55240 system_pods.go:89] "etcd-functional-20210915172222-22677" [f3b8b47d-ae44-4fee-887b-572b9ffbdd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0915 17:23:35.445359 55240 system_pods.go:89] "kube-apiserver-functional-20210915172222-22677" [e0bc9b12-e146-452b-87d1-743baa85d5e9] Pending
I0915 17:23:35.445365 55240 system_pods.go:89] "kube-controller-manager-functional-20210915172222-22677" [894edb22-9af5-43fa-bf97-3f92fa641b82] Running
I0915 17:23:35.445368 55240 system_pods.go:89] "kube-proxy-rb5vd" [e2565cb2-d7c1-4b6b-add4-2369c73bd486] Running
I0915 17:23:35.445371 55240 system_pods.go:89] "kube-scheduler-functional-20210915172222-22677" [c52c2543-2254-4d9b-9db3-5622731d9f08] Running
I0915 17:23:35.445375 55240 system_pods.go:89] "storage-provisioner" [e0cdcfac-77cf-4ab3-b433-0dcf5a820199] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 17:23:35.445390 55240 retry.go:31] will retry after 381.329545ms: missing components: kube-apiserver
I0915 17:23:35.831399 55240 system_pods.go:86] 7 kube-system pods found
I0915 17:23:35.831417 55240 system_pods.go:89] "coredns-78fcd69978-4hsx6" [2a2c8f23-e485-406d-a27b-ab1190b13a07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 17:23:35.831424 55240 system_pods.go:89] "etcd-functional-20210915172222-22677" [f3b8b47d-ae44-4fee-887b-572b9ffbdd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0915 17:23:35.831428 55240 system_pods.go:89] "kube-apiserver-functional-20210915172222-22677" [e0bc9b12-e146-452b-87d1-743baa85d5e9] Pending
I0915 17:23:35.831432 55240 system_pods.go:89] "kube-controller-manager-functional-20210915172222-22677" [894edb22-9af5-43fa-bf97-3f92fa641b82] Running
I0915 17:23:35.831435 55240 system_pods.go:89] "kube-proxy-rb5vd" [e2565cb2-d7c1-4b6b-add4-2369c73bd486] Running
I0915 17:23:35.831438 55240 system_pods.go:89] "kube-scheduler-functional-20210915172222-22677" [c52c2543-2254-4d9b-9db3-5622731d9f08] Running
I0915 17:23:35.831442 55240 system_pods.go:89] "storage-provisioner" [e0cdcfac-77cf-4ab3-b433-0dcf5a820199] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 17:23:35.831456 55240 retry.go:31] will retry after 422.765636ms: missing components: kube-apiserver
I0915 17:23:36.259635 55240 system_pods.go:86] 7 kube-system pods found
I0915 17:23:36.259652 55240 system_pods.go:89] "coredns-78fcd69978-4hsx6" [2a2c8f23-e485-406d-a27b-ab1190b13a07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 17:23:36.259659 55240 system_pods.go:89] "etcd-functional-20210915172222-22677" [f3b8b47d-ae44-4fee-887b-572b9ffbdd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0915 17:23:36.259663 55240 system_pods.go:89] "kube-apiserver-functional-20210915172222-22677" [e0bc9b12-e146-452b-87d1-743baa85d5e9] Pending
I0915 17:23:36.259667 55240 system_pods.go:89] "kube-controller-manager-functional-20210915172222-22677" [894edb22-9af5-43fa-bf97-3f92fa641b82] Running
I0915 17:23:36.259670 55240 system_pods.go:89] "kube-proxy-rb5vd" [e2565cb2-d7c1-4b6b-add4-2369c73bd486] Running
I0915 17:23:36.259674 55240 system_pods.go:89] "kube-scheduler-functional-20210915172222-22677" [c52c2543-2254-4d9b-9db3-5622731d9f08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0915 17:23:36.259679 55240 system_pods.go:89] "storage-provisioner" [e0cdcfac-77cf-4ab3-b433-0dcf5a820199] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 17:23:36.259697 55240 retry.go:31] will retry after 473.074753ms: missing components: kube-apiserver
I0915 17:23:36.738346 55240 system_pods.go:86] 7 kube-system pods found
I0915 17:23:36.738369 55240 system_pods.go:89] "coredns-78fcd69978-4hsx6" [2a2c8f23-e485-406d-a27b-ab1190b13a07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 17:23:36.738376 55240 system_pods.go:89] "etcd-functional-20210915172222-22677" [f3b8b47d-ae44-4fee-887b-572b9ffbdd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0915 17:23:36.738380 55240 system_pods.go:89] "kube-apiserver-functional-20210915172222-22677" [e0bc9b12-e146-452b-87d1-743baa85d5e9] Pending
I0915 17:23:36.738384 55240 system_pods.go:89] "kube-controller-manager-functional-20210915172222-22677" [894edb22-9af5-43fa-bf97-3f92fa641b82] Running
I0915 17:23:36.738387 55240 system_pods.go:89] "kube-proxy-rb5vd" [e2565cb2-d7c1-4b6b-add4-2369c73bd486] Running
I0915 17:23:36.738391 55240 system_pods.go:89] "kube-scheduler-functional-20210915172222-22677" [c52c2543-2254-4d9b-9db3-5622731d9f08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0915 17:23:36.738396 55240 system_pods.go:89] "storage-provisioner" [e0cdcfac-77cf-4ab3-b433-0dcf5a820199] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 17:23:36.738408 55240 retry.go:31] will retry after 587.352751ms: missing components: kube-apiserver
I0915 17:23:37.332270 55240 system_pods.go:86] 7 kube-system pods found
I0915 17:23:37.332290 55240 system_pods.go:89] "coredns-78fcd69978-4hsx6" [2a2c8f23-e485-406d-a27b-ab1190b13a07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 17:23:37.332297 55240 system_pods.go:89] "etcd-functional-20210915172222-22677" [f3b8b47d-ae44-4fee-887b-572b9ffbdd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0915 17:23:37.332301 55240 system_pods.go:89] "kube-apiserver-functional-20210915172222-22677" [e0bc9b12-e146-452b-87d1-743baa85d5e9] Pending
I0915 17:23:37.332305 55240 system_pods.go:89] "kube-controller-manager-functional-20210915172222-22677" [894edb22-9af5-43fa-bf97-3f92fa641b82] Running
I0915 17:23:37.332308 55240 system_pods.go:89] "kube-proxy-rb5vd" [e2565cb2-d7c1-4b6b-add4-2369c73bd486] Running
I0915 17:23:37.332313 55240 system_pods.go:89] "kube-scheduler-functional-20210915172222-22677" [c52c2543-2254-4d9b-9db3-5622731d9f08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0915 17:23:37.332319 55240 system_pods.go:89] "storage-provisioner" [e0cdcfac-77cf-4ab3-b433-0dcf5a820199] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 17:23:37.332331 55240 retry.go:31] will retry after 834.206799ms: missing components: kube-apiserver
I0915 17:23:38.171687 55240 system_pods.go:86] 7 kube-system pods found
I0915 17:23:38.171708 55240 system_pods.go:89] "coredns-78fcd69978-4hsx6" [2a2c8f23-e485-406d-a27b-ab1190b13a07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 17:23:38.171715 55240 system_pods.go:89] "etcd-functional-20210915172222-22677" [f3b8b47d-ae44-4fee-887b-572b9ffbdd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0915 17:23:38.171721 55240 system_pods.go:89] "kube-apiserver-functional-20210915172222-22677" [e0bc9b12-e146-452b-87d1-743baa85d5e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0915 17:23:38.171726 55240 system_pods.go:89] "kube-controller-manager-functional-20210915172222-22677" [894edb22-9af5-43fa-bf97-3f92fa641b82] Running
I0915 17:23:38.171730 55240 system_pods.go:89] "kube-proxy-rb5vd" [e2565cb2-d7c1-4b6b-add4-2369c73bd486] Running
I0915 17:23:38.171734 55240 system_pods.go:89] "kube-scheduler-functional-20210915172222-22677" [c52c2543-2254-4d9b-9db3-5622731d9f08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0915 17:23:38.171740 55240 system_pods.go:89] "storage-provisioner" [e0cdcfac-77cf-4ab3-b433-0dcf5a820199] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 17:23:38.171745 55240 system_pods.go:126] duration metric: took 2.998151779s to wait for k8s-apps to be running ...
I0915 17:23:38.171753 55240 system_svc.go:44] waiting for kubelet service to be running ....
I0915 17:23:38.171792 55240 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0915 17:23:38.181920 55240 system_svc.go:56] duration metric: took 10.158087ms WaitForService to wait for kubelet.
I0915 17:23:38.181940 55240 kubeadm.go:547] duration metric: took 9.810575761s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0915 17:23:38.181965 55240 node_conditions.go:102] verifying NodePressure condition ...
I0915 17:23:38.184824 55240 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
I0915 17:23:38.184835 55240 node_conditions.go:123] node cpu capacity is 8
I0915 17:23:38.184846 55240 node_conditions.go:105] duration metric: took 2.876748ms to run NodePressure ...
I0915 17:23:38.184855 55240 start.go:231] waiting for startup goroutines ...
I0915 17:23:38.227483 55240 start.go:462] kubectl: 1.20.5, cluster: 1.22.1 (minor skew: 2)
I0915 17:23:38.229801 55240 out.go:177]
W0915 17:23:38.229966 55240 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.1.
I0915 17:23:38.231684 55240 out.go:177] - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
I0915 17:23:38.233338 55240 out.go:177] * Done! kubectl is now configured to use "functional-20210915172222-22677" cluster and "default" namespace by default
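(The skew warning above comes from comparing the client and cluster minor versions logged at start.go:462 — kubectl 1.20.5 vs. cluster 1.22.1, minor skew 2. As a rough illustration only, and not minikube's actual implementation, a check of that shape looks like this in Go:)

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorVersion extracts the MINOR component of a "MAJOR.MINOR.PATCH" string,
// tolerating a leading "v" (e.g. "v1.22.1" -> 22).
func minorVersion(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	client, cluster := "1.20.5", "1.22.1" // versions reported in the log above
	cm, _ := minorVersion(client)
	sm, _ := minorVersion(cluster)
	skew := cm - sm
	if skew < 0 {
		skew = -skew
	}
	if skew > 1 { // more than one minor version apart triggers the warning
		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s (minor skew: %d)\n", client, cluster, skew)
	}
}

(In practice the remedy the log itself suggests is to use the bundled client, e.g. 'minikube kubectl -- get pods -A'.)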
*
* ==> Docker <==
* -- Logs begin at Wed 2021-09-15 17:22:25 UTC, end at Wed 2021-09-15 17:23:39 UTC. --
Sep 15 17:22:29 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:22:29.236135482Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
Sep 15 17:22:29 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:22:29.236204364Z" level=info msg="Daemon has completed initialization"
Sep 15 17:22:29 functional-20210915172222-22677 systemd[1]: Started Docker Application Container Engine.
Sep 15 17:22:29 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:22:29.252799850Z" level=info msg="API listen on [::]:2376"
Sep 15 17:22:29 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:22:29.256197488Z" level=info msg="API listen on /var/run/docker.sock"
Sep 15 17:23:02 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:02.545346183Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.849725713Z" level=info msg="ignoring event" container=4cf87740c5251d61fbf31189ff1c44924f402cbb8cc6ece5413b6592c7e6817c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.850574683Z" level=info msg="ignoring event" container=7b477389f1f0eb6d9215da693bd451a29bdbad4cdd29caca221161ad3b2ef6a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.852842719Z" level=info msg="ignoring event" container=28f7f7a008ba626f59a38f6fd323584340438256d2ce83406cb8bb8e36636e7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.852911835Z" level=info msg="ignoring event" container=4d217398099bd36a781c9266a2f05e5d74e7aa03d033b3bd40758e1aa1fbeea6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.854461564Z" level=info msg="ignoring event" container=84170625d5e461ded06afa122c12d796597725ac1c07a938b81ec30871818645 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.928185781Z" level=info msg="ignoring event" container=697f6239cc8e69a6630f74f0a4f98eba50e54efe2aff7d8e65ddc253ef3afe37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.928435779Z" level=info msg="ignoring event" container=3d58c9a8080de2777ea16b8d1d172144b6c301e8dc3077083783974ec1f8e3b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.929521957Z" level=info msg="ignoring event" container=cf5e0878010f633c87d25aa39c469f9253b7d8e3165de6fc4a5e5fe0ce25d40e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.946108255Z" level=info msg="ignoring event" container=893b77e15033238dca43476775b645307ff535374e879336916535454614ebab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.947235884Z" level=info msg="ignoring event" container=9415956d233644483cc4e5416b470608183967bea1f2e20c96e2b5ac74e10f61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:20 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:20.948536162Z" level=info msg="ignoring event" container=06716f7d33c9691159830fb919b80151f9bddfb1cd742e9776051aef84f63e2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:21 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:21.169536467Z" level=info msg="ignoring event" container=75c83d84914d795749f6fd084d910cf442e30b449865cf56ad0bf46260bcf0cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:21 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:21.173303976Z" level=info msg="ignoring event" container=93bf0bae25d870516a295b4e78d8143b64c2ec7aac171c7f55b23f37e5f9d3c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:22 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:22.042938235Z" level=info msg="ignoring event" container=169e3f70645d2b9003889bb0982750d365397ff89fcd05270ff896a0dde5a679 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:25 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:25.863467551Z" level=info msg="ignoring event" container=f77561e61ffbcb24350d0af8e8090e94338279348d7a08b3539a361baf4aa059 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:29 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:29.046496745Z" level=info msg="ignoring event" container=6c4926bac9cb53123ce965cef87735685a88279ee8ff43276da6d4c5d7b5dfc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:29 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:29.413056951Z" level=info msg="ignoring event" container=5b08bde81065c05bebacb1d4837a9d6c3eed76572a648cab28357256767eb967 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:29 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:29.441995074Z" level=info msg="ignoring event" container=c3d62f3fa4929d884ab4ead3988166fbbe4a3507ae9356d6a5fca91d422df89e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 17:23:35 functional-20210915172222-22677 dockerd[449]: time="2021-09-15T17:23:35.993262459Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
8238cb59f7622 6e38f40d628db 2 seconds ago Running storage-provisioner 2 0211c31b4ff93
d5500dc4dfc8e 8d147537fb7d1 4 seconds ago Running coredns 1 c8a74f44948b3
36fd794c8c2a5 f30469a2491a5 9 seconds ago Running kube-apiserver 1 dd32609f2bb32
6c4926bac9cb5 f30469a2491a5 11 seconds ago Exited kube-apiserver 0 dd32609f2bb32
a2ab135084448 0048118155842 18 seconds ago Running etcd 1 292ebc67df501
573579ce53ca4 aca5ededae9c8 18 seconds ago Running kube-scheduler 1 52ed9c67b1367
169e3f70645d2 6e38f40d628db 18 seconds ago Exited storage-provisioner 1 0211c31b4ff93
8fafb362b6651 6e002eb89a881 18 seconds ago Running kube-controller-manager 1 cc25f554ce35f
e19e143acb81a 36c4ebbc9d979 18 seconds ago Running kube-proxy 1 3e5992df7aeb9
f77561e61ffbc 8d147537fb7d1 37 seconds ago Exited coredns 0 7b477389f1f0e
697f6239cc8e6 36c4ebbc9d979 38 seconds ago Exited kube-proxy 0 893b77e150332
9415956d23364 0048118155842 About a minute ago Exited etcd 0 28f7f7a008ba6
75c83d84914d7 aca5ededae9c8 About a minute ago Exited kube-scheduler 0 cf5e0878010f6
84170625d5e46 6e002eb89a881 About a minute ago Exited kube-controller-manager 0 06716f7d33c96
*
* ==> coredns [d5500dc4dfc8] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
*
* ==> coredns [f77561e61ffb] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: functional-20210915172222-22677
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-20210915172222-22677
kubernetes.io/os=linux
minikube.k8s.io/commit=0d321606059ead2904f4f5ddd59a9a7026c7ee04
minikube.k8s.io/name=functional-20210915172222-22677
minikube.k8s.io/updated_at=2021_09_15T17_22_42_0700
minikube.k8s.io/version=v1.23.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 15 Sep 2021 17:22:39 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-20210915172222-22677
AcquireTime: <unset>
RenewTime: Wed, 15 Sep 2021 17:23:37 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 15 Sep 2021 17:23:27 +0000 Wed, 15 Sep 2021 17:22:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 15 Sep 2021 17:23:27 +0000 Wed, 15 Sep 2021 17:22:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 15 Sep 2021 17:23:27 +0000 Wed, 15 Sep 2021 17:22:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 15 Sep 2021 17:23:27 +0000 Wed, 15 Sep 2021 17:23:27 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-20210915172222-22677
Capacity:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951368Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951368Ki
pods: 110
System Info:
Machine ID: 4b5e5cdd53d44f5ab575bb522d42acca
System UUID: 9709380d-391d-4eb1-ada0-c4b1a3c9c46d
Boot ID: d1f48c72-ac32-4c43-bb38-cbe6179197b8
Kernel Version: 4.9.0-16-amd64
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.8
Kubelet Version: v1.22.1
Kube-Proxy Version: v1.22.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-78fcd69978-4hsx6 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 39s
kube-system etcd-functional-20210915172222-22677 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 56s
kube-system kube-apiserver-functional-20210915172222-22677 250m (3%) 0 (0%) 0 (0%) 0 (0%) 6s
kube-system kube-controller-manager-functional-20210915172222-22677 200m (2%) 0 (0%) 0 (0%) 0 (0%) 56s
kube-system kube-proxy-rb5vd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39s
kube-system kube-scheduler-functional-20210915172222-22677 100m (1%) 0 (0%) 0 (0%) 0 (0%) 56s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 57s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 57s kubelet Node functional-20210915172222-22677 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 57s kubelet Node functional-20210915172222-22677 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 57s kubelet Node functional-20210915172222-22677 status is now: NodeHasSufficientPID
Normal NodeNotReady 57s kubelet Node functional-20210915172222-22677 status is now: NodeNotReady
Normal NodeAllocatableEnforced 57s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 47s kubelet Node functional-20210915172222-22677 status is now: NodeReady
Normal Starting 12s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 12s kubelet Node functional-20210915172222-22677 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12s kubelet Node functional-20210915172222-22677 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12s kubelet Node functional-20210915172222-22677 status is now: NodeHasSufficientPID
Normal NodeNotReady 12s kubelet Node functional-20210915172222-22677 status is now: NodeNotReady
Normal NodeAllocatableEnforced 12s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 12s kubelet Node functional-20210915172222-22677 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.007958] kvm [12002]: vcpu0, guest rIP: 0xffffffff9984d464 unhandled rdmsr: 0x606
[ +14.551020] kvm [12135]: vcpu0, guest rIP: 0xffffffffa5c4d464 unhandled rdmsr: 0x140
[ +0.008547] kvm [12135]: vcpu0, guest rIP: 0xffffffffa5c4d464 unhandled rdmsr: 0x4e
[ +0.017019] kvm [12135]: vcpu1, guest rIP: 0xffffffffa5c4d464 unhandled rdmsr: 0x140
[ +0.009780] kvm [12135]: vcpu1, guest rIP: 0xffffffffa5c4d464 unhandled rdmsr: 0x4e
[ +3.788892] kvm [12135]: vcpu0, guest rIP: 0xffffffffa5c4d464 unhandled rdmsr: 0x64e
[ +0.007976] kvm [12135]: vcpu0, guest rIP: 0xffffffffa5c4d464 unhandled rdmsr: 0x34
[ +0.007996] kvm [12135]: vcpu0, guest rIP: 0xffffffffa5c4d464 unhandled rdmsr: 0x606
[Sep15 17:06] kvm [12958]: vcpu0, guest rIP: 0xffffffffa284d464 unhandled rdmsr: 0x140
[ +0.008523] kvm [12958]: vcpu0, guest rIP: 0xffffffffa284d464 unhandled rdmsr: 0x4e
[ +0.017123] kvm [12958]: vcpu1, guest rIP: 0xffffffffa284d464 unhandled rdmsr: 0x140
[ +0.009081] kvm [12958]: vcpu1, guest rIP: 0xffffffffa284d464 unhandled rdmsr: 0x4e
[ +3.763565] kvm [12958]: vcpu0, guest rIP: 0xffffffffa284d464 unhandled rdmsr: 0x64e
[ +0.007979] kvm [12958]: vcpu0, guest rIP: 0xffffffffa284d464 unhandled rdmsr: 0x34
[ +0.007905] kvm [12958]: vcpu0, guest rIP: 0xffffffffa284d464 unhandled rdmsr: 0x606
[Sep15 17:09] kvm [13331]: vcpu0, guest rIP: 0xffffffff8a04d464 unhandled rdmsr: 0x140
[ +0.008636] kvm [13331]: vcpu0, guest rIP: 0xffffffff8a04d464 unhandled rdmsr: 0x4e
[ +0.016084] kvm [13331]: vcpu1, guest rIP: 0xffffffff8a04d464 unhandled rdmsr: 0x140
[ +0.008591] kvm [13331]: vcpu1, guest rIP: 0xffffffff8a04d464 unhandled rdmsr: 0x4e
[ +3.627636] kvm [13331]: vcpu1, guest rIP: 0xffffffff8a04d464 unhandled rdmsr: 0x64e
[ +0.007975] kvm [13331]: vcpu1, guest rIP: 0xffffffff8a04d464 unhandled rdmsr: 0x34
[ +0.007906] kvm [13331]: vcpu1, guest rIP: 0xffffffff8a04d464 unhandled rdmsr: 0x606
[Sep15 17:18] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 17:21] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 17:22] cgroup: cgroup2: unknown option "nsdelegate"
*
* ==> etcd [9415956d2336] <==
* {"level":"info","ts":"2021-09-15T17:22:59.118Z","caller":"traceutil/trace.go:171","msg":"trace[363790470] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:365; }","duration":"3.866968738s","start":"2021-09-15T17:22:55.251Z","end":"2021-09-15T17:22:59.118Z","steps":["trace[363790470] 'range keys from in-memory index tree' (duration: 3.866379723s)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T17:22:59.118Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T17:22:55.251Z","time spent":"3.867035609s","remote":"127.0.0.1:56606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":377,"request content":"key:\"/registry/namespaces/kube-system\" "}
{"level":"info","ts":"2021-09-15T17:22:59.118Z","caller":"traceutil/trace.go:171","msg":"trace[531476913] linearizableReadLoop","detail":"{readStateIndex:378; appliedIndex:376; }","duration":"2.869206819s","start":"2021-09-15T17:22:56.248Z","end":"2021-09-15T17:22:59.118Z","steps":["trace[531476913] 'read index received' (duration: 2.713208222s)","trace[531476913] 'applied index is now lower than readState.Index' (duration: 155.997659ms)"],"step_count":2}
{"level":"warn","ts":"2021-09-15T17:22:59.118Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T17:22:57.799Z","time spent":"1.318925707s","remote":"127.0.0.1:56592","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":788,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-functional-20210915172222-22677.16a50ead5fc830b4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-functional-20210915172222-22677.16a50ead5fc830b4\" value_size:678 lease:8128007674518411922 >> failure:<>"}
{"level":"warn","ts":"2021-09-15T17:22:59.118Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.32105317s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-09-15T17:22:59.118Z","caller":"traceutil/trace.go:171","msg":"trace[948890341] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:366; }","duration":"1.321084139s","start":"2021-09-15T17:22:57.797Z","end":"2021-09-15T17:22:59.118Z","steps":["trace[948890341] 'agreement among raft nodes before linearized reading' (duration: 1.32101262s)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T17:22:59.118Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T17:22:57.797Z","time spent":"1.321127866s","remote":"127.0.0.1:56702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2021-09-15T17:23:00.358Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.072288847s","expected-duration":"100ms","prefix":"","request":"header:<ID:8128007674518412223 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/expand-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" value_size:124 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2021-09-15T17:23:00.358Z","caller":"traceutil/trace.go:171","msg":"trace[254876860] linearizableReadLoop","detail":"{readStateIndex:380; appliedIndex:379; }","duration":"921.80628ms","start":"2021-09-15T17:22:59.437Z","end":"2021-09-15T17:23:00.358Z","steps":["trace[254876860] 'read index received' (duration: 43.762µs)","trace[254876860] 'applied index is now lower than readState.Index' (duration: 921.761698ms)"],"step_count":2}
{"level":"info","ts":"2021-09-15T17:23:00.359Z","caller":"traceutil/trace.go:171","msg":"trace[719780911] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"1.229612389s","start":"2021-09-15T17:22:59.129Z","end":"2021-09-15T17:23:00.359Z","steps":["trace[719780911] 'process raft request' (duration: 157.061372ms)","trace[719780911] 'compare' (duration: 1.072145206s)"],"step_count":2}
{"level":"warn","ts":"2021-09-15T17:23:00.359Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"570.987543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2021-09-15T17:23:00.359Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"921.990437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-09-15T17:23:00.359Z","caller":"traceutil/trace.go:171","msg":"trace[1470066984] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:368; }","duration":"571.033407ms","start":"2021-09-15T17:22:59.788Z","end":"2021-09-15T17:23:00.359Z","steps":["trace[1470066984] 'agreement among raft nodes before linearized reading' (duration: 570.905592ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T17:23:00.359Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T17:22:59.129Z","time spent":"1.229685021s","remote":"127.0.0.1:56616","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":186,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/expand-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" value_size:124 >> failure:<>"}
{"level":"info","ts":"2021-09-15T17:23:00.359Z","caller":"traceutil/trace.go:171","msg":"trace[841877134] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:368; }","duration":"922.052739ms","start":"2021-09-15T17:22:59.437Z","end":"2021-09-15T17:23:00.359Z","steps":["trace[841877134] 'agreement among raft nodes before linearized reading' (duration: 921.926409ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T17:23:00.359Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T17:22:59.437Z","time spent":"922.10863ms","remote":"127.0.0.1:56616","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/default/default\" "}
{"level":"warn","ts":"2021-09-15T17:23:00.359Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T17:22:59.788Z","time spent":"571.070986ms","remote":"127.0.0.1:56702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2021-09-15T17:23:20.743Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2021-09-15T17:23:20.743Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20210915172222-22677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2021/09/15 17:23:20 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2021/09/15 17:23:20 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2021-09-15T17:23:20.831Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2021-09-15T17:23:20.832Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2021-09-15T17:23:20.833Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2021-09-15T17:23:20.833Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20210915172222-22677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> etcd [a2ab13508444] <==
* {"level":"info","ts":"2021-09-15T17:23:21.942Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-09-15T17:23:21.944Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-id":"fa54960ea34d58be","cluster-version":"3.5"}
{"level":"info","ts":"2021-09-15T17:23:21.946Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-09-15T17:23:21.946Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2021-09-15T17:23:21.946Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2021-09-15T17:23:21.946Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-09-15T17:23:21.947Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2021-09-15T17:23:21.947Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2021-09-15T17:23:21.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2021-09-15T17:23:21.947Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2021-09-15T17:23:21.947Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.5","to":"3.5"}
{"level":"info","ts":"2021-09-15T17:23:22.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
{"level":"info","ts":"2021-09-15T17:23:22.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
{"level":"info","ts":"2021-09-15T17:23:22.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2021-09-15T17:23:22.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
{"level":"info","ts":"2021-09-15T17:23:22.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
{"level":"info","ts":"2021-09-15T17:23:22.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
{"level":"info","ts":"2021-09-15T17:23:22.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
{"level":"info","ts":"2021-09-15T17:23:22.439Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20210915172222-22677 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2021-09-15T17:23:22.439Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-09-15T17:23:22.439Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2021-09-15T17:23:22.439Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-09-15T17:23:22.439Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2021-09-15T17:23:22.441Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2021-09-15T17:23:22.441Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
*
* ==> kernel <==
* 17:23:39 up 3:06, 0 users, load average: 1.98, 2.19, 3.21
Linux functional-20210915172222-22677 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [36fd794c8c2a] <==
* I0915 17:23:33.647278 1 available_controller.go:491] Starting AvailableConditionController
I0915 17:23:33.647287 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0915 17:23:33.647304 1 apf_controller.go:299] Starting API Priority and Fairness config controller
I0915 17:23:33.647731 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0915 17:23:33.647741 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0915 17:23:33.647761 1 controller.go:85] Starting OpenAPI controller
I0915 17:23:33.647788 1 naming_controller.go:291] Starting NamingConditionController
I0915 17:23:33.647802 1 establishing_controller.go:76] Starting EstablishingController
I0915 17:23:33.647816 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0915 17:23:33.647828 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0915 17:23:33.647843 1 crd_finalizer.go:266] Starting CRDFinalizer
E0915 17:23:33.739895 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0915 17:23:33.750255 1 apf_controller.go:304] Running API Priority and Fairness config worker
I0915 17:23:33.751621 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0915 17:23:33.845165 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0915 17:23:33.848195 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0915 17:23:33.848959 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0915 17:23:33.852583 1 shared_informer.go:247] Caches are synced for node_authorizer
I0915 17:23:33.927446 1 cache.go:39] Caches are synced for autoregister controller
I0915 17:23:34.643482 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0915 17:23:34.643515 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0915 17:23:34.652359 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0915 17:23:37.392619 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0915 17:23:38.051166 1 controller.go:611] quota admission added evaluator for: endpoints
I0915 17:23:38.053644 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-apiserver [6c4926bac9cb] <==
* I0915 17:23:29.013483 1 server.go:553] external host was not specified, using 192.168.49.2
I0915 17:23:29.013901 1 server.go:161] Version: v1.22.1
Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
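(This first apiserver attempt, container 6c4926bac9cb, exited because something still had 0.0.0.0:8441 bound when it started; the replacement attempt 36fd794c8c2a, listed under container status above, did bind and is serving. As an illustrative sketch only — not minikube or kubeadm code — the same "address already in use" condition can be probed from Go before retrying:)

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForFreePort polls until nothing is bound to the given TCP port or the
// deadline passes. net.Listen fails with "address already in use" while an
// older process still owns the listener.
func waitForFreePort(port int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ln, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
		if err == nil {
			ln.Close() // port is free; release it immediately
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("port %d still in use: %w", port, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForFreePort(8441, 10*time.Second); err != nil {
		fmt.Println(err)
	}
}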
*
* ==> kube-controller-manager [84170625d5e4] <==
* I0915 17:23:00.464772 1 shared_informer.go:247] Caches are synced for cidrallocator
I0915 17:23:00.465176 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I0915 17:23:00.471277 1 range_allocator.go:373] Set node functional-20210915172222-22677 PodCIDR to [10.244.0.0/24]
I0915 17:23:00.476464 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-zj6pr"
I0915 17:23:00.527826 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0915 17:23:00.531418 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-4hsx6"
I0915 17:23:00.537300 1 shared_informer.go:247] Caches are synced for daemon sets
W0915 17:23:00.540775 1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
I0915 17:23:00.552764 1 shared_informer.go:247] Caches are synced for taint
I0915 17:23:00.552855 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
I0915 17:23:00.552881 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
W0915 17:23:00.552935 1 node_lifecycle_controller.go:1013] Missing timestamp for Node functional-20210915172222-22677. Assuming now as a timestamp.
I0915 17:23:00.552971 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0915 17:23:00.553033 1 event.go:291] "Event occurred" object="functional-20210915172222-22677" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20210915172222-22677 event: Registered Node functional-20210915172222-22677 in Controller"
I0915 17:23:00.553460 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rb5vd"
I0915 17:23:00.654277 1 shared_informer.go:247] Caches are synced for attach detach
I0915 17:23:00.663629 1 shared_informer.go:247] Caches are synced for resource quota
I0915 17:23:00.666057 1 shared_informer.go:247] Caches are synced for resource quota
I0915 17:23:00.703068 1 shared_informer.go:247] Caches are synced for crt configmap
I0915 17:23:00.703463 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0915 17:23:00.903914 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
I0915 17:23:00.914334 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-zj6pr"
I0915 17:23:01.087425 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 17:23:01.130801 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 17:23:01.130829 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [8fafb362b665] <==
* I0915 17:23:38.052675 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0915 17:23:38.052821 1 event.go:291] "Event occurred" object="functional-20210915172222-22677" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20210915172222-22677 event: Registered Node functional-20210915172222-22677 in Controller"
I0915 17:23:38.053596 1 shared_informer.go:247] Caches are synced for TTL
I0915 17:23:38.056408 1 shared_informer.go:247] Caches are synced for TTL after finished
I0915 17:23:38.059677 1 shared_informer.go:247] Caches are synced for disruption
I0915 17:23:38.059696 1 disruption.go:371] Sending events to api server.
I0915 17:23:38.059701 1 shared_informer.go:247] Caches are synced for namespace
I0915 17:23:38.067283 1 shared_informer.go:247] Caches are synced for HPA
I0915 17:23:38.068712 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0915 17:23:38.073888 1 shared_informer.go:247] Caches are synced for PV protection
I0915 17:23:38.077176 1 shared_informer.go:247] Caches are synced for ReplicationController
I0915 17:23:38.079412 1 shared_informer.go:247] Caches are synced for job
I0915 17:23:38.134973 1 shared_informer.go:247] Caches are synced for cronjob
I0915 17:23:38.167406 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0915 17:23:38.233949 1 shared_informer.go:247] Caches are synced for stateful set
I0915 17:23:38.254395 1 shared_informer.go:247] Caches are synced for expand
I0915 17:23:38.264009 1 shared_informer.go:247] Caches are synced for attach detach
I0915 17:23:38.270594 1 shared_informer.go:247] Caches are synced for persistent volume
I0915 17:23:38.272934 1 shared_informer.go:247] Caches are synced for PVC protection
I0915 17:23:38.276125 1 shared_informer.go:247] Caches are synced for ephemeral
I0915 17:23:38.279433 1 shared_informer.go:247] Caches are synced for resource quota
I0915 17:23:38.330642 1 shared_informer.go:247] Caches are synced for resource quota
I0915 17:23:38.731256 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 17:23:38.771737 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 17:23:38.771768 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [697f6239cc8e] <==
* I0915 17:23:02.056327 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0915 17:23:02.056402 1 server_others.go:140] Detected node IP 192.168.49.2
W0915 17:23:02.056426 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I0915 17:23:02.134802 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0915 17:23:02.134852 1 server_others.go:212] Using iptables Proxier.
I0915 17:23:02.134865 1 server_others.go:219] creating dualStackProxier for iptables.
W0915 17:23:02.134887 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0915 17:23:02.135232 1 server.go:649] Version: v1.22.1
I0915 17:23:02.136020 1 config.go:315] Starting service config controller
I0915 17:23:02.136093 1 shared_informer.go:240] Waiting for caches to sync for service config
I0915 17:23:02.136046 1 config.go:224] Starting endpoint slice config controller
I0915 17:23:02.136114 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
E0915 17:23:02.138816 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"functional-20210915172222-22677.16a50eae62c6c808", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048a8dd881aa34f, ext:192364634, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-functional-20210915172222-22677", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Nam
e:"functional-20210915172222-22677", UID:"functional-20210915172222-22677", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "functional-20210915172222-22677.16a50eae62c6c808" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0915 17:23:02.236482 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0915 17:23:02.236501 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-proxy [e19e143acb81] <==
* E0915 17:23:21.847993 1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915172222-22677": dial tcp 192.168.49.2:8441: connect: connection refused
I0915 17:23:25.640196 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0915 17:23:25.640233 1 server_others.go:140] Detected node IP 192.168.49.2
W0915 17:23:25.640257 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I0915 17:23:25.746660 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0915 17:23:25.746712 1 server_others.go:212] Using iptables Proxier.
I0915 17:23:25.746727 1 server_others.go:219] creating dualStackProxier for iptables.
W0915 17:23:25.746748 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0915 17:23:25.751119 1 server.go:649] Version: v1.22.1
I0915 17:23:25.752010 1 config.go:224] Starting endpoint slice config controller
I0915 17:23:25.752041 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0915 17:23:25.752153 1 config.go:315] Starting service config controller
I0915 17:23:25.752166 1 shared_informer.go:240] Waiting for caches to sync for service config
E0915 17:23:25.827749 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"functional-20210915172222-22677.16a50eb3e26482c1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048a8e36cd0396e, ext:3999567202, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-functional-20210915172222-22677", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Na
me:"functional-20210915172222-22677", UID:"functional-20210915172222-22677", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "functional-20210915172222-22677.16a50eb3e26482c1" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0915 17:23:25.852585 1 shared_informer.go:247] Caches are synced for service config
I0915 17:23:25.852609 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [573579ce53ca] <==
* I0915 17:23:22.657418 1 serving.go:347] Generated self-signed cert in-memory
I0915 17:23:25.734651 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0915 17:23:25.734700 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0915 17:23:25.734760 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0915 17:23:25.734816 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0915 17:23:25.734657 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0915 17:23:25.734908 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0915 17:23:25.735056 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0915 17:23:25.735534 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0915 17:23:25.835475 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0915 17:23:25.835476 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0915 17:23:25.835642 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0915 17:23:33.678412 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
E0915 17:23:33.678412 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
E0915 17:23:33.678451 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
E0915 17:23:33.678421 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
E0915 17:23:33.678468 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
E0915 17:23:33.678485 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
E0915 17:23:33.678492 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
E0915 17:23:33.678518 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0915 17:23:33.678526 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
E0915 17:23:33.678548 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
E0915 17:23:33.678561 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
E0915 17:23:33.678562 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
E0915 17:23:33.678654 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
*
* ==> kube-scheduler [75c83d84914d] <==
* I0915 17:22:39.430288 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0915 17:22:39.438415 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0915 17:22:39.438778 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0915 17:22:39.438806 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0915 17:22:39.438869 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 17:22:39.438891 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0915 17:22:39.438974 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0915 17:22:39.439042 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0915 17:22:39.439112 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 17:22:39.439587 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 17:22:39.439731 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 17:22:39.439787 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0915 17:22:39.439836 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0915 17:22:39.439802 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0915 17:22:39.440001 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0915 17:22:39.440529 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0915 17:22:40.312682 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0915 17:22:40.406964 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 17:22:40.500422 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0915 17:22:40.528413 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 17:22:40.569927 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0915 17:22:43.830072 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0915 17:23:20.730896 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0915 17:23:20.731976 1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
I0915 17:23:20.732094 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
*
* ==> kubelet <==
* -- Logs begin at Wed 2021-09-15 17:22:25 UTC, end at Wed 2021-09-15 17:23:39 UTC. --
Sep 15 17:23:30 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:30.393843 5938 kubelet.go:1701] "Failed creating a mirror pod for" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-scheduler-functional-20210915172222-22677"
Sep 15 17:23:30 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:30.593721 5938 kubelet.go:1701] "Failed creating a mirror pod for" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-controller-manager-functional-20210915172222-22677"
Sep 15 17:23:30 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:30.793279 5938 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20210915172222-22677\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-20210915172222-22677"
Sep 15 17:23:30 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:30.793380 5938 scope.go:110] "RemoveContainer" containerID="6c4926bac9cb53123ce965cef87735685a88279ee8ff43276da6d4c5d7b5dfc0"
Sep 15 17:23:30 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:30.993891 5938 projected.go:199] Error preparing data for projected volume kube-api-access-2zzh7 for pod kube-system/storage-provisioner: failed to fetch token: Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token": dial tcp 192.168.49.2:8441: connect: connection refused
Sep 15 17:23:30 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:30.994002 5938 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/e0cdcfac-77cf-4ab3-b433-0dcf5a820199-kube-api-access-2zzh7 podName:e0cdcfac-77cf-4ab3-b433-0dcf5a820199 nodeName:}" failed. No retries permitted until 2021-09-15 17:23:31.993975576 +0000 UTC m=+5.018218287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zzh7" (UniqueName: "kubernetes.io/projected/e0cdcfac-77cf-4ab3-b433-0dcf5a820199-kube-api-access-2zzh7") pod "storage-provisioner" (UID: "e0cdcfac-77cf-4ab3-b433-0dcf5a820199") : failed to fetch token: Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token": dial tcp 192.168.49.2:8441: connect: connection refused
Sep 15 17:23:31 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:31.366761 5938 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9a25ab830682419b4d663e848415821b path="/var/lib/kubelet/pods/9a25ab830682419b4d663e848415821b/volumes"
Sep 15 17:23:31 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:31.683378 5938 kubelet.go:1683] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20210915172222-22677" podUID=6bd75bd6-106f-46ae-a043-1f641eb07c39
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:33.668039 5938 projected.go:199] Error preparing data for projected volume kube-api-access-2zzh7 for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:functional-20210915172222-22677" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210915172222-22677' and this object
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:33.668134 5938 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/e0cdcfac-77cf-4ab3-b433-0dcf5a820199-kube-api-access-2zzh7 podName:e0cdcfac-77cf-4ab3-b433-0dcf5a820199 nodeName:}" failed. No retries permitted until 2021-09-15 17:23:35.668109933 +0000 UTC m=+8.692352643 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zzh7" (UniqueName: "kubernetes.io/projected/e0cdcfac-77cf-4ab3-b433-0dcf5a820199-kube-api-access-2zzh7") pod "storage-provisioner" (UID: "e0cdcfac-77cf-4ab3-b433-0dcf5a820199") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:functional-20210915172222-22677" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210915172222-22677' and this object
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:33.668202 5938 projected.go:199] Error preparing data for projected volume kube-api-access-q99w7 for pod kube-system/coredns-78fcd69978-4hsx6: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:functional-20210915172222-22677" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210915172222-22677' and this object
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:33.668240 5938 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/2a2c8f23-e485-406d-a27b-ab1190b13a07-kube-api-access-q99w7 podName:2a2c8f23-e485-406d-a27b-ab1190b13a07 nodeName:}" failed. No retries permitted until 2021-09-15 17:23:34.668227935 +0000 UTC m=+7.692470645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-q99w7" (UniqueName: "kubernetes.io/projected/2a2c8f23-e485-406d-a27b-ab1190b13a07-kube-api-access-q99w7") pod "coredns-78fcd69978-4hsx6" (UID: "2a2c8f23-e485-406d-a27b-ab1190b13a07") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:functional-20210915172222-22677" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210915172222-22677' and this object
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:33.669198 5938 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-20210915172222-22677" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210915172222-22677' and this object
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:33.669279 5938 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-20210915172222-22677" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210915172222-22677' and this object
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:33.669351 5938 projected.go:199] Error preparing data for projected volume kube-api-access-dsgd5 for pod kube-system/kube-proxy-rb5vd: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:functional-20210915172222-22677" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210915172222-22677' and this object
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:33.669408 5938 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/e2565cb2-d7c1-4b6b-add4-2369c73bd486-kube-api-access-dsgd5 podName:e2565cb2-d7c1-4b6b-add4-2369c73bd486 nodeName:}" failed. No retries permitted until 2021-09-15 17:23:34.669390044 +0000 UTC m=+7.693632751 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dsgd5" (UniqueName: "kubernetes.io/projected/e2565cb2-d7c1-4b6b-add4-2369c73bd486-kube-api-access-dsgd5") pod "kube-proxy-rb5vd" (UID: "e2565cb2-d7c1-4b6b-add4-2369c73bd486") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:functional-20210915172222-22677" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210915172222-22677' and this object
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: E0915 17:23:33.669474 5938 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-20210915172222-22677" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210915172222-22677' and this object
Sep 15 17:23:33 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:33.933010 5938 kubelet.go:1688] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-20210915172222-22677"
Sep 15 17:23:34 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:34.741227 5938 kubelet.go:1683] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20210915172222-22677" podUID=6bd75bd6-106f-46ae-a043-1f641eb07c39
Sep 15 17:23:35 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:35.987162 5938 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-4hsx6 through plugin: invalid network status for"
Sep 15 17:23:35 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:35.991439 5938 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c8a74f44948b39644d9fd5a16ee98426a2608b8ff8228b221a58fffc03daaeaa"
Sep 15 17:23:36 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:36.196848 5938 kubelet.go:1683] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20210915172222-22677" podUID=6bd75bd6-106f-46ae-a043-1f641eb07c39
Sep 15 17:23:36 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:36.999483 5938 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-4hsx6 through plugin: invalid network status for"
Sep 15 17:23:37 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:37.397939 5938 scope.go:110] "RemoveContainer" containerID="169e3f70645d2b9003889bb0982750d365397ff89fcd05270ff896a0dde5a679"
Sep 15 17:23:38 functional-20210915172222-22677 kubelet[5938]: I0915 17:23:38.045942 5938 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
*
* ==> storage-provisioner [169e3f70645d] <==
* I0915 17:23:21.956692 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0915 17:23:21.959501 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> storage-provisioner [8238cb59f762] <==
* I0915 17:23:37.610545 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0915 17:23:37.619161 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0915 17:23:37.619224 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20210915172222-22677 -n functional-20210915172222-22677
helpers_test.go:262: (dbg) Run: kubectl --context functional-20210915172222-22677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods:
helpers_test.go:273: ======> post-mortem[TestFunctional/serial/ComponentHealth]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context functional-20210915172222-22677 describe pod
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context functional-20210915172222-22677 describe pod : exit status 1 (48.172473ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:278: kubectl --context functional-20210915172222-22677 describe pod : exit status 1
--- FAIL: TestFunctional/serial/ComponentHealth (2.30s)