=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 34.2655ms
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT TestAddons/parallel/Registry
helpers_test.go:340: "registry-dzdlw" [4a872b2d-a2b1-46f9-9afd-c52b6647383f] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01401437s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:340: "registry-proxy-xfrxz" [19d31762-bc36-413f-8533-e97b57d38a28] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007653227s
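(For reference, the two waits above poll kube-system for pods matching a label selector until they report Ready. A rough client-go sketch of that kind of wait follows; the helper name, kubeconfig path, and polling interval are placeholders for illustration, not the test's own code.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReadyPods polls until at least one pod matching selector in ns has
// the Ready condition set to True, or the timeout expires.
func waitForReadyPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no ready pod matching %q in %s after %v", selector, ns, timeout)
}

func main() {
	// Assumes the default kubeconfig location; the test instead uses its own context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForReadyPods(cs, "kube-system", "registry-proxy=true", 10*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pods ready")
}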
addons_test.go:294: (dbg) Run: kubectl --context addons-20210811003021-1387367 delete po -l run=registry-test --now
addons_test.go:299: (dbg) Run: kubectl --context addons-20210811003021-1387367 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:299: (dbg) Done: kubectl --context addons-20210811003021-1387367 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.51520973s)
addons_test.go:313: (dbg) Run: out/minikube-linux-arm64 -p addons-20210811003021-1387367 ip
2021/08/11 00:32:54 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:32:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:32:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:32:55 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:32:55 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:32:57 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:32:57 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:33:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:33:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:09 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:33:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:09 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:33:10 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:10 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:33:12 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:12 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:33:16 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:16 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:33:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:25 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:33:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:25 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:33:26 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:26 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:33:28 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:28 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:33:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:32 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:33:40 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:41 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:33:41 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:41 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:33:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:33:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:44 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:33:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:48 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:33:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:58 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:33:58 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:58 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:33:59 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:59 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:34:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:34:05 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:05 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:34:13 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:15 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:34:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:34:16 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:16 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:34:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:34:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:22 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:34:30 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:35 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:34:35 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:35 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:34:36 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:36 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:34:38 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:38 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:34:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:34:50 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:53 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:34:53 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:53 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:34:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:34:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:56 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:35:00 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:00 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:35:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:17 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:35:17 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:17 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:35:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:35:20 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:20 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:35:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:24 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:35:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:339: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
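(The external-access probe above retries with exponential backoff of 1s, 2s, 4s, 8s and gives up after 5 attempts. A minimal standalone sketch of an equivalent check using only the Go standard library follows; the URL and retry limits mirror what the log shows, not the test's actual helper.)

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// probe issues GET requests against url, backing off 1s, 2s, 4s, 8s between
// attempts, and treats any HTTP response as success, mirroring the retry
// pattern visible in the log above.
func probe(url string, attempts int) error {
	wait := time.Second
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		if i < attempts-1 {
			time.Sleep(wait)
			wait *= 2
		}
	}
	return fmt.Errorf("giving up after %d attempt(s): %v", attempts, lastErr)
}

func main() {
	if err := probe("http://192.168.49.2:5000", 5); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}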
addons_test.go:342: (dbg) Run: out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable registry --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect addons-20210811003021-1387367
helpers_test.go:236: (dbg) docker inspect addons-20210811003021-1387367:
-- stdout --
[
{
"Id": "5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120",
"Created": "2021-08-11T00:30:25.788956339Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1388276,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-08-11T00:30:26.269675899Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
"ResolvConfPath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/hostname",
"HostsPath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/hosts",
"LogPath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120-json.log",
"Name": "/addons-20210811003021-1387367",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-20210811003021-1387367:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-20210811003021-1387367",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495-init/diff:/var/lib/docker/overlay2/b901673749d4c23cf617379d66c43acbc184f898f580a05fca5568725e6ccb6a/diff:/var/lib/docker/overlay2/3fd19ee2c9d46b2cdb8a592d42d57d9efdba3a556c98f5018ae07caa15606bc4/diff:/var/lib/docker/overlay2/31f547e426e6dfa6ed65e0b7cb851c18e771f23a77868552685aacb2e126dc0a/diff:/var/lib/docker/overlay2/6ae53b304b800757235653c63c7879ae7f05b4d4f0400f7f6fadc53e2059aa5a/diff:/var/lib/docker/overlay2/7702d6ed068e8b454dd11af18cb8cb76986898926e3e3130c2d7f638062de9ee/diff:/var/lib/docker/overlay2/e67b0ce82f4d6c092698530106fa38495aa54b2fe5600ac022386a3d17165948/diff:/var/lib/docker/overlay2/d3ddbdbbe88f3c5a0867637eeb78a22790daa833a6179cdd4690044007911336/diff:/var/lib/docker/overlay2/10c48536a5187dfe63f1c090ec32daef76e852de7cc4a7e7f96a2fa1510314cc/diff:/var/lib/docker/overlay2/2186c26bc131feb045ca64a28e2cc431fed76b32afc3d3587916b98a9af807fe/diff:/var/lib/docker/overlay2/292c9d
aaf6d60ee235c7ac65bfc1b61b9c0d360ebbebcf08ba5efeb1b40de075/diff:/var/lib/docker/overlay2/9bc521e84afeeb62fa312e9eb2afc367bc449dbf66f412e17eb2338f79d6f920/diff:/var/lib/docker/overlay2/b1a93cf97438f068af56026fc52aaa329c46e4cac3d8f91c8d692871adaf451a/diff:/var/lib/docker/overlay2/b8e42d5d9e69e72a11e3cad660b9f29335dfc6cd1b4a6aebdbf5e6f313efe749/diff:/var/lib/docker/overlay2/6a6eaef3ce06d941ce606aaebc530878ce54d24a51c7947ca936a3a6eb4dac16/diff:/var/lib/docker/overlay2/62370bd2a6e35ce796647f79ccf9906147c91e8ceee31e401bdb7842371c6bee/diff:/var/lib/docker/overlay2/e673dacc1c6815100340b85af47aeb90eb5fca87778caec1d728de5b8cc9a36e/diff:/var/lib/docker/overlay2/bd17ea1d8cd8e2f88bd7fb4cee8a097365f6b81efc91f203a0504873fc0916a6/diff:/var/lib/docker/overlay2/d2f15007a2a5c037903647e5dd0d6882903fa163d23087bbd8eadeaf3618377b/diff:/var/lib/docker/overlay2/0bbc7fe1b1d62a2db9b4f402e6bc8781815951ae6df608307fd50a2fde242253/diff:/var/lib/docker/overlay2/d124fa0a0ea67ad0362eec0adf1f3e7cbd885b2cf4c31f83e917d97a09a791af/diff:/var/lib/d
ocker/overlay2/ee74e2f91490ecb544a95b306f1001046f3c4656413878d09be8bf67de7b4c4f/diff:/var/lib/docker/overlay2/4279b3790ea6aeb262c4ecd9cf4aae5beb1430f4fbb599b49ff27d0f7b3a9714/diff:/var/lib/docker/overlay2/b7fd6a0c88249dbf5e233463fbe08559ca287465617e7721977a002204ea3af5/diff:/var/lib/docker/overlay2/c495a83eeda1cf6df33d49341ee01f15738845e6330c0a5b3c29e11fdc4733b0/diff:/var/lib/docker/overlay2/ac747f0260d49943953568bbbe150f3a4f28d70bd82f40d0485ef13b12195044/diff:/var/lib/docker/overlay2/aa98d62ac831ecd60bc1acfa1708c0648c306bb7fa187026b472e9ae5c3364a4/diff:/var/lib/docker/overlay2/34829b132a53df856a1be03aa46565640e20cb075db18bd9775a5055fe0c0b22/diff:/var/lib/docker/overlay2/85a074fe6f79f3ea9d8b2f628355f41bb4f73b398257f8b6659bc171d86a0736/diff:/var/lib/docker/overlay2/c8c145d2e68e655880cd5c8fae8cb9f7cbd6b112f1f64fced224b17d4f60fbc7/diff:/var/lib/docker/overlay2/7480ad16aa2479be3569dd07eca685bc3a37a785e7ff281c448c7ca718cc67c3/diff:/var/lib/docker/overlay2/519f1304b1b8ee2daf8c1b9411f3e46d4fedacc8d6446937321372c4e8d
f2cb9/diff:/var/lib/docker/overlay2/246fcb20bef1dbfdc41186d1b7143566cd571a067830cc3f946b232024c2e85c/diff:/var/lib/docker/overlay2/f5f15e6d497abc56d9a2d901ed821a56e6f3effe2fc8d6c3ef64297faea15179/diff:/var/lib/docker/overlay2/3aa1fb1105e860c53ef63317f6757f9629a4a20f35764d976df2b0f0cee5d4f2/diff:/var/lib/docker/overlay2/765f7cba41acbb266d2cef89f2a76a5659b78c3b075223bf23257ac44acfe177/diff:/var/lib/docker/overlay2/53179410fe05d9ddea0a22ba2c123ca8e75f9c7839c2a64902e411e2bda2de23/diff",
"MergedDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495/merged",
"UpperDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495/diff",
"WorkDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-20210811003021-1387367",
"Source": "/var/lib/docker/volumes/addons-20210811003021-1387367/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-20210811003021-1387367",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-20210811003021-1387367",
"name.minikube.sigs.k8s.io": "addons-20210811003021-1387367",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "17fbfc5588d01a72b28ff1d6c58d2e4bb8f2d21449a18677b10dd71b3b83ded4",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50250"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50249"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50246"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50248"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50247"
}
]
},
"SandboxKey": "/var/run/docker/netns/17fbfc5588d0",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-20210811003021-1387367": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"5aa46682b774",
"addons-20210811003021-1387367"
],
"NetworkID": "6dba5b957173120a4aafdf3873eab586b4a4a9b5791668afbe348cef17103048",
"EndpointID": "d99047e7dbe6428356a66d026486a85ca7cdfff3ea6f120c69d9470809fd105b",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
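(The inspect output above shows 5000/tcp on the node container published to 127.0.0.1:50248 on the host, while the failing probe dialed the container IP 192.168.49.2:5000 directly. A quick connectivity check against both endpoints can help tell a missing listener apart from a missing port mapping; the sketch below is illustrative only, and the 50248 host port is specific to this run.)

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the two registry endpoints visible in the inspect output: the node
// container's IP on 5000/tcp and the host-side 127.0.0.1 mapping.
func main() {
	for _, addr := range []string{"192.168.49.2:5000", "127.0.0.1:50248"} {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}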
helpers_test.go:240: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210811003021-1387367 -n addons-20210811003021-1387367
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 -p addons-20210811003021-1387367 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-20210811003021-1387367 logs -n 25: (1.572386229s)
helpers_test.go:253: TestAddons/parallel/Registry logs:
-- stdout --
*
* ==> Audit <==
* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete | --all | download-only-20210811002935-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:07 UTC | Wed, 11 Aug 2021 00:30:07 UTC |
| delete | -p | download-only-20210811002935-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:07 UTC | Wed, 11 Aug 2021 00:30:07 UTC |
| | download-only-20210811002935-1387367 | | | | | |
| delete | -p | download-only-20210811002935-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:07 UTC | Wed, 11 Aug 2021 00:30:08 UTC |
| | download-only-20210811002935-1387367 | | | | | |
| delete | -p | download-docker-20210811003008-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:21 UTC | Wed, 11 Aug 2021 00:30:21 UTC |
| | download-docker-20210811003008-1387367 | | | | | |
| start | -p | addons-20210811003021-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:21 UTC | Wed, 11 Aug 2021 00:32:41 UTC |
| | addons-20210811003021-1387367 | | | | | |
| | --wait=true --memory=4000 | | | | | |
| | --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=olm | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=gcp-auth | | | | | |
| -p | addons-20210811003021-1387367 | addons-20210811003021-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:32:54 UTC | Wed, 11 Aug 2021 00:32:54 UTC |
| | ip | | | | | |
| -p | addons-20210811003021-1387367 | addons-20210811003021-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:35:32 UTC | Wed, 11 Aug 2021 00:35:32 UTC |
| | addons disable registry | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/08/11 00:30:21
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.16.7 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0811 00:30:21.602659 1387850 out.go:298] Setting OutFile to fd 1 ...
I0811 00:30:21.602845 1387850 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0811 00:30:21.602855 1387850 out.go:311] Setting ErrFile to fd 2...
I0811 00:30:21.602859 1387850 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0811 00:30:21.603002 1387850 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
I0811 00:30:21.603313 1387850 out.go:305] Setting JSON to false
I0811 00:30:21.604120 1387850 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":36768,"bootTime":1628605053,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0811 00:30:21.604207 1387850 start.go:121] virtualization:
I0811 00:30:21.607468 1387850 out.go:177] * [addons-20210811003021-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
I0811 00:30:21.611463 1387850 out.go:177] - MINIKUBE_LOCATION=12230
I0811 00:30:21.609464 1387850 notify.go:169] Checking for updates...
I0811 00:30:21.615278 1387850 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
I0811 00:30:21.618400 1387850 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
I0811 00:30:21.621705 1387850 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0811 00:30:21.621941 1387850 driver.go:335] Setting default libvirt URI to qemu:///system
I0811 00:30:21.658581 1387850 docker.go:132] docker version: linux-20.10.8
I0811 00:30:21.658691 1387850 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0811 00:30:21.762135 1387850 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:30:21.69832939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInf
o:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
I0811 00:30:21.762295 1387850 docker.go:244] overlay module found
I0811 00:30:21.764908 1387850 out.go:177] * Using the docker driver based on user configuration
I0811 00:30:21.764929 1387850 start.go:278] selected driver: docker
I0811 00:30:21.764934 1387850 start.go:751] validating driver "docker" against <nil>
I0811 00:30:21.764951 1387850 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0811 00:30:21.765000 1387850 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0811 00:30:21.765023 1387850 out.go:242] ! Your cgroup does not allow setting memory.
I0811 00:30:21.767459 1387850 out.go:177] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0811 00:30:21.767848 1387850 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0811 00:30:21.854139 1387850 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:30:21.794641916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
I0811 00:30:21.854262 1387850 start_flags.go:263] no existing cluster config was found, will generate one from the flags
I0811 00:30:21.854419 1387850 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0811 00:30:21.854442 1387850 cni.go:93] Creating CNI manager for ""
I0811 00:30:21.854449 1387850 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0811 00:30:21.854458 1387850 start_flags.go:277] config:
{Name:addons-20210811003021-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket
: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0811 00:30:21.856879 1387850 out.go:177] * Starting control plane node addons-20210811003021-1387367 in cluster addons-20210811003021-1387367
I0811 00:30:21.856928 1387850 cache.go:117] Beginning downloading kic base image for docker with docker
I0811 00:30:21.858843 1387850 out.go:177] * Pulling base image ...
I0811 00:30:21.858881 1387850 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
I0811 00:30:21.858920 1387850 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
I0811 00:30:21.858936 1387850 cache.go:56] Caching tarball of preloaded images
I0811 00:30:21.859099 1387850 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0811 00:30:21.859124 1387850 cache.go:59] Finished verifying existence of preloaded tar for v1.21.3 on docker
I0811 00:30:21.859416 1387850 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/config.json ...
I0811 00:30:21.859452 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/config.json: {Name:mkad62a8ef7b1cb9eac286f0a4233efc658a409a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:21.859624 1387850 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
I0811 00:30:21.914689 1387850 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
I0811 00:30:21.914718 1387850 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
I0811 00:30:21.914731 1387850 cache.go:205] Successfully downloaded all kic artifacts
I0811 00:30:21.914776 1387850 start.go:313] acquiring machines lock for addons-20210811003021-1387367: {Name:mk226548caa021fe6ed2b9069936448c3d09f345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0811 00:30:21.914932 1387850 start.go:317] acquired machines lock for "addons-20210811003021-1387367" in 132.463µs
I0811 00:30:21.914971 1387850 start.go:89] Provisioning new machine with config: &{Name:addons-20210811003021-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
I0811 00:30:21.915061 1387850 start.go:126] createHost starting for "" (driver="docker")
I0811 00:30:21.917526 1387850 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0811 00:30:21.917773 1387850 start.go:160] libmachine.API.Create for "addons-20210811003021-1387367" (driver="docker")
I0811 00:30:21.917815 1387850 client.go:168] LocalClient.Create starting
I0811 00:30:21.917923 1387850 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
I0811 00:30:22.339798 1387850 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
I0811 00:30:22.974163 1387850 cli_runner.go:115] Run: docker network inspect addons-20210811003021-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0811 00:30:23.003309 1387850 cli_runner.go:162] docker network inspect addons-20210811003021-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0811 00:30:23.003391 1387850 network_create.go:255] running [docker network inspect addons-20210811003021-1387367] to gather additional debugging logs...
I0811 00:30:23.003413 1387850 cli_runner.go:115] Run: docker network inspect addons-20210811003021-1387367
W0811 00:30:23.032304 1387850 cli_runner.go:162] docker network inspect addons-20210811003021-1387367 returned with exit code 1
I0811 00:30:23.032336 1387850 network_create.go:258] error running [docker network inspect addons-20210811003021-1387367]: docker network inspect addons-20210811003021-1387367: exit status 1
stdout:
[]
stderr:
Error: No such network: addons-20210811003021-1387367
I0811 00:30:23.032348 1387850 network_create.go:260] output of [docker network inspect addons-20210811003021-1387367]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: addons-20210811003021-1387367
** /stderr **
I0811 00:30:23.032405 1387850 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0811 00:30:23.062238 1387850 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x40000d7398] misses:0}
I0811 00:30:23.062294 1387850 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0811 00:30:23.062314 1387850 network_create.go:106] attempt to create docker network addons-20210811003021-1387367 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0811 00:30:23.062373 1387850 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210811003021-1387367
I0811 00:30:23.131311 1387850 network_create.go:90] docker network addons-20210811003021-1387367 192.168.49.0/24 created
I0811 00:30:23.131341 1387850 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210811003021-1387367" container
I0811 00:30:23.131409 1387850 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0811 00:30:23.160364 1387850 cli_runner.go:115] Run: docker volume create addons-20210811003021-1387367 --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --label created_by.minikube.sigs.k8s.io=true
I0811 00:30:23.190804 1387850 oci.go:102] Successfully created a docker volume addons-20210811003021-1387367
I0811 00:30:23.190897 1387850 cli_runner.go:115] Run: docker run --rm --name addons-20210811003021-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --entrypoint /usr/bin/test -v addons-20210811003021-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
I0811 00:30:25.611528 1387850 cli_runner.go:168] Completed: docker run --rm --name addons-20210811003021-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --entrypoint /usr/bin/test -v addons-20210811003021-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (2.420589766s)
I0811 00:30:25.611562 1387850 oci.go:106] Successfully prepared a docker volume addons-20210811003021-1387367
W0811 00:30:25.611598 1387850 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0811 00:30:25.611608 1387850 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0811 00:30:25.611675 1387850 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0811 00:30:25.611691 1387850 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
I0811 00:30:25.611714 1387850 kic.go:179] Starting extracting preloaded images to volume ...
I0811 00:30:25.611770 1387850 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210811003021-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
I0811 00:30:25.746101 1387850 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210811003021-1387367 --name addons-20210811003021-1387367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210811003021-1387367 --network addons-20210811003021-1387367 --ip 192.168.49.2 --volume addons-20210811003021-1387367:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
I0811 00:30:26.279482 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Running}}
I0811 00:30:26.347407 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:30:26.400431 1387850 cli_runner.go:115] Run: docker exec addons-20210811003021-1387367 stat /var/lib/dpkg/alternatives/iptables
I0811 00:30:26.499917 1387850 oci.go:278] the created container "addons-20210811003021-1387367" has a running status.
I0811 00:30:26.499948 1387850 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa...
I0811 00:30:26.732383 1387850 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0811 00:30:26.881674 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:30:26.918020 1387850 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0811 00:30:26.918042 1387850 kic_runner.go:115] Args: [docker exec --privileged addons-20210811003021-1387367 chown docker:docker /home/docker/.ssh/authorized_keys]
I0811 00:30:35.641601 1387850 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210811003021-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (10.02979324s)
I0811 00:30:35.641632 1387850 kic.go:188] duration metric: took 10.029915 seconds to extract preloaded images to volume
I0811 00:30:35.641709 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:30:35.681545 1387850 machine.go:88] provisioning docker machine ...
I0811 00:30:35.681590 1387850 ubuntu.go:169] provisioning hostname "addons-20210811003021-1387367"
I0811 00:30:35.681654 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:35.724584 1387850 main.go:130] libmachine: Using SSH client type: native
I0811 00:30:35.724791 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil> [] 0s} 127.0.0.1 50250 <nil> <nil>}
I0811 00:30:35.724811 1387850 main.go:130] libmachine: About to run SSH command:
sudo hostname addons-20210811003021-1387367 && echo "addons-20210811003021-1387367" | sudo tee /etc/hostname
I0811 00:30:35.855478 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210811003021-1387367
I0811 00:30:35.855550 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:35.892128 1387850 main.go:130] libmachine: Using SSH client type: native
I0811 00:30:35.892309 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil> [] 0s} 127.0.0.1 50250 <nil> <nil>}
I0811 00:30:35.892335 1387850 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-20210811003021-1387367' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210811003021-1387367/g' /etc/hosts;
else
echo '127.0.1.1 addons-20210811003021-1387367' | sudo tee -a /etc/hosts;
fi
fi
I0811 00:30:36.016702 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0811 00:30:36.016728 1387850 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/k
ey.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
I0811 00:30:36.016752 1387850 ubuntu.go:177] setting up certificates
I0811 00:30:36.016760 1387850 provision.go:83] configureAuth start
I0811 00:30:36.016819 1387850 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210811003021-1387367
I0811 00:30:36.046617 1387850 provision.go:137] copyHostCerts
I0811 00:30:36.046706 1387850 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
I0811 00:30:36.046821 1387850 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
I0811 00:30:36.046895 1387850 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
I0811 00:30:36.046947 1387850 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.addons-20210811003021-1387367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210811003021-1387367]
I0811 00:30:36.901481 1387850 provision.go:171] copyRemoteCerts
I0811 00:30:36.901548 1387850 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0811 00:30:36.901597 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:36.932010 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:30:37.015797 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0811 00:30:37.032008 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
I0811 00:30:37.048411 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0811 00:30:37.064819 1387850 provision.go:86] duration metric: configureAuth took 1.048044188s
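The configureAuth block above is TLS provisioning for the Docker daemon: the host certs are copied into the profile, a server certificate is issued from the local CA with the SANs listed at provision.go:111, and the results are scp'd to /etc/docker. A rough Go sketch of issuing such a CA-signed server certificate with crypto/x509 follows; the file names are placeholders and the SAN/organization values are simply the ones printed in the log, so treat it as an illustration rather than minikube's actual provision code.

// certsketch.go - sketch: issue a server cert signed by an existing CA, with SANs.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA material like the ca.pem/ca-key.pem copied by copyHostCerts (paths are placeholders).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS8PrivateKey(keyBlock.Bytes) // assumes a PKCS#8 CA key
	if err != nil {
		panic(err)
	}

	// Fresh key pair for the Docker server certificate.
	serverKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-20210811003021-1387367"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as logged: IPs plus host names.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "addons-20210811003021-1387367"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}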
I0811 00:30:37.064842 1387850 ubuntu.go:193] setting minikube options for container-runtime
I0811 00:30:37.065077 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:37.094964 1387850 main.go:130] libmachine: Using SSH client type: native
I0811 00:30:37.095136 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil> [] 0s} 127.0.0.1 50250 <nil> <nil>}
I0811 00:30:37.095153 1387850 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0811 00:30:37.212966 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
I0811 00:30:37.212986 1387850 ubuntu.go:71] root file system type: overlay
I0811 00:30:37.213159 1387850 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0811 00:30:37.213224 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:37.243079 1387850 main.go:130] libmachine: Using SSH client type: native
I0811 00:30:37.243251 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil> [] 0s} 127.0.0.1 50250 <nil> <nil>}
I0811 00:30:37.243366 1387850 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0811 00:30:37.365398 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0811 00:30:37.365479 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:37.396410 1387850 main.go:130] libmachine: Using SSH client type: native
I0811 00:30:37.396581 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil> [] 0s} 127.0.0.1 50250 <nil> <nil>}
I0811 00:30:37.396607 1387850 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0811 00:30:38.259628 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-06-02 11:55:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-08-11 00:30:37.360623318 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
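What happened above is an idempotent unit update: the generated docker.service.new is diffed against the installed unit, and only because the contents differ is the new file moved into place and docker re-enabled and restarted. A small Go sketch of that write-compare-swap-restart pattern follows; the paths and systemctl invocations are the ones in the logged command, everything else is illustrative.

// unitupdate.go - sketch: replace and restart docker.service only when it changed.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	newUnit := unit + ".new"

	current, _ := os.ReadFile(unit) // a missing unit just reads as empty
	proposed, err := os.ReadFile(newUnit)
	if err != nil {
		panic(err)
	}
	if bytes.Equal(current, proposed) {
		return // identical: leave the running daemon alone
	}
	if err := os.Rename(newUnit, unit); err != nil {
		panic(err)
	}
	// Reload unit files, enable on boot, restart - the same sequence as the logged command.
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if err := run("systemctl", append([]string{"-f"}, args...)...); err != nil {
			panic(err)
		}
	}
}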
I0811 00:30:38.259655 1387850 machine.go:91] provisioned docker machine in 2.578088023s
I0811 00:30:38.259665 1387850 client.go:171] LocalClient.Create took 16.341840918s
I0811 00:30:38.259674 1387850 start.go:168] duration metric: libmachine.API.Create for "addons-20210811003021-1387367" took 16.341902554s
I0811 00:30:38.259682 1387850 start.go:267] post-start starting for "addons-20210811003021-1387367" (driver="docker")
I0811 00:30:38.259696 1387850 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0811 00:30:38.259758 1387850 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0811 00:30:38.259813 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:38.298448 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:30:38.384125 1387850 ssh_runner.go:149] Run: cat /etc/os-release
I0811 00:30:38.386661 1387850 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0811 00:30:38.386687 1387850 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0811 00:30:38.386698 1387850 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0811 00:30:38.386705 1387850 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0811 00:30:38.386715 1387850 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
I0811 00:30:38.386779 1387850 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
I0811 00:30:38.386806 1387850 start.go:270] post-start completed in 127.109195ms
I0811 00:30:38.387133 1387850 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210811003021-1387367
I0811 00:30:38.416894 1387850 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/config.json ...
I0811 00:30:38.417167 1387850 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0811 00:30:38.417220 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:38.446953 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:30:38.529083 1387850 start.go:129] duration metric: createHost completed in 16.614007292s
I0811 00:30:38.529119 1387850 start.go:80] releasing machines lock for "addons-20210811003021-1387367", held for 16.614173157s
I0811 00:30:38.529201 1387850 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210811003021-1387367
I0811 00:30:38.558592 1387850 ssh_runner.go:149] Run: systemctl --version
I0811 00:30:38.558641 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:38.558656 1387850 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0811 00:30:38.558720 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:30:38.594358 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:30:38.601093 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:30:38.830574 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0811 00:30:38.840501 1387850 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0811 00:30:38.851219 1387850 cruntime.go:249] skipping containerd shutdown because we are bound to it
I0811 00:30:38.851291 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0811 00:30:38.861277 1387850 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0811 00:30:38.874263 1387850 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0811 00:30:38.958499 1387850 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0811 00:30:39.047217 1387850 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0811 00:30:39.056705 1387850 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0811 00:30:39.146104 1387850 ssh_runner.go:149] Run: sudo systemctl start docker
I0811 00:30:39.155707 1387850 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0811 00:30:39.205950 1387850 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0811 00:30:39.260548 1387850 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
I0811 00:30:39.260677 1387850 cli_runner.go:115] Run: docker network inspect addons-20210811003021-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0811 00:30:39.290146 1387850 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0811 00:30:39.293407 1387850 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0811 00:30:39.302229 1387850 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
I0811 00:30:39.302303 1387850 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0811 00:30:39.341446 1387850 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4
-- /stdout --
I0811 00:30:39.341473 1387850 docker.go:466] Images already preloaded, skipping extraction
I0811 00:30:39.341528 1387850 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0811 00:30:39.380996 1387850 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4
-- /stdout --
I0811 00:30:39.381035 1387850 cache_images.go:74] Images are preloaded, skipping loading
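The two `docker images --format {{.Repository}}:{{.Tag}}` runs above back the preload decision: when every image needed for the requested Kubernetes version is already present, extraction of the preload tarball is skipped. A hedged Go sketch of that check follows; the required list is abbreviated to a few of the images printed above.

// preloadcheck.go - sketch: decide whether the preloaded images make extraction unnecessary.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	required := []string{ // abbreviated; the real list covers all control-plane images
		"k8s.gcr.io/kube-apiserver:v1.21.3",
		"k8s.gcr.io/kube-proxy:v1.21.3",
		"k8s.gcr.io/etcd:3.4.13-0",
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing", img, "- would extract the preload tarball here")
			return
		}
	}
	fmt.Println("images are preloaded, skipping loading")
}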
I0811 00:30:39.381093 1387850 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0811 00:30:39.515442 1387850 cni.go:93] Creating CNI manager for ""
I0811 00:30:39.515466 1387850 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0811 00:30:39.515474 1387850 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0811 00:30:39.515487 1387850 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210811003021-1387367 NodeName:addons-20210811003021-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0811 00:30:39.515632 1387850 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "addons-20210811003021-1387367"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.21.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0811 00:30:39.515719 1387850 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20210811003021-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
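The kubeadm and kubelet configuration above is rendered from a handful of cluster parameters: advertise address, node name, pod and service CIDRs, and the Kubernetes version. A compact Go sketch of generating a similar config with text/template follows; the struct and the trimmed template are illustrative, not minikube's bootstrapper code.

// kubeadmtmpl.go - sketch: render a (trimmed) kubeadm config from cluster parameters.
package main

import (
	"os"
	"text/template"
)

type params struct {
	AdvertiseAddress  string
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log; error ignored for brevity.
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress:  "192.168.49.2",
		NodeName:          "addons-20210811003021-1387367",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.21.3",
	})
}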
I0811 00:30:39.515790 1387850 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
I0811 00:30:39.524221 1387850 binaries.go:44] Found k8s binaries, skipping transfer
I0811 00:30:39.524290 1387850 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0811 00:30:39.530941 1387850 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
I0811 00:30:39.543732 1387850 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0811 00:30:39.556462 1387850 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
I0811 00:30:39.568807 1387850 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0811 00:30:39.572672 1387850 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
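The two commands above pin control-plane.minikube.internal in the guest's /etc/hosts: any stale entry is filtered out and a fresh "IP name" line is appended before the file is copied back. A Go sketch of the same idempotent update follows; the real step does it with grep/echo/cp over ssh, so the helper here is hypothetical.

// hostsentry.go - sketch: drop any existing mapping for a name and append a new one.
package main

import (
	"os"
	"strings"
)

// pinHost removes lines ending in name and appends "ip<TAB>name".
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) || strings.HasSuffix(line, " "+name) {
			continue // stale entry for this name, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}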
I0811 00:30:39.581434 1387850 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367 for IP: 192.168.49.2
I0811 00:30:39.581481 1387850 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
I0811 00:30:40.153609 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt ...
I0811 00:30:40.153643 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt: {Name:mk59a57628b7830e6da9d2ae7e8c01cd5efde140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:40.153894 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key ...
I0811 00:30:40.153911 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key: {Name:mk96e056b1cd3dc0b43035730f08908c26c31fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:40.154044 1387850 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
I0811 00:30:40.471227 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt ...
I0811 00:30:40.471263 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt: {Name:mkfd778913fc3b0da592cfc8a7d08059e895c701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:40.471472 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key ...
I0811 00:30:40.471492 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key: {Name:mk0ce74341fb606236ed0d73a79e2c5cede7537d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:40.471637 1387850 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.key
I0811 00:30:40.471650 1387850 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt with IP's: []
I0811 00:30:40.932035 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt ...
I0811 00:30:40.932074 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: {Name:mk9fa1e098b232414d6313e801fa75c86c1d49bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:40.932328 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.key ...
I0811 00:30:40.932348 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.key: {Name:mkfe24cba1294c2a137e1fca2c7855f1633fb7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:40.932465 1387850 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2
I0811 00:30:40.932477 1387850 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0811 00:30:41.378481 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2 ...
I0811 00:30:41.378518 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2: {Name:mk61de60fd373ccc807bd5cda384447d381e8be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:41.378737 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2 ...
I0811 00:30:41.378752 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2: {Name:mk28ad1051189a18b59148562d5150391e295b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:41.378851 1387850 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt
I0811 00:30:41.378911 1387850 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key
I0811 00:30:41.378968 1387850 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key
I0811 00:30:41.378981 1387850 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt with IP's: []
I0811 00:30:42.573038 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt ...
I0811 00:30:42.573080 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt: {Name:mk0190b4814f268c32de2db03fd82b7d16622974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:42.573306 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key ...
I0811 00:30:42.573323 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key: {Name:mkb9c7131f1d68ca2e257df72147ba667f820217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:30:42.573512 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
I0811 00:30:42.573555 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
I0811 00:30:42.573587 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
I0811 00:30:42.573617 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
I0811 00:30:42.574683 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0811 00:30:42.592943 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0811 00:30:42.609946 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0811 00:30:42.626691 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0811 00:30:42.643759 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0811 00:30:42.660446 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0811 00:30:42.677226 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0811 00:30:42.693943 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0811 00:30:42.711059 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0811 00:30:42.727916 1387850 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0811 00:30:42.740400 1387850 ssh_runner.go:149] Run: openssl version
I0811 00:30:42.746610 1387850 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0811 00:30:42.755297 1387850 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0811 00:30:42.758347 1387850 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
I0811 00:30:42.758400 1387850 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0811 00:30:42.763252 1387850 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
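The steps just above wire the new minikubeCA into the guest's OpenSSL trust store: the CA is placed under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and /etc/ssl/certs/<hash>.0 is linked at it. A minimal Go sketch of that hashing-and-linking step follows, using the same commands and paths as the log with simplified error handling.

// catrust.go - sketch: link the CA into /etc/ssl/certs under its subject hash.
package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	const ca = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the logged link name
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return // already linked
	}
	if err := os.Symlink(ca, link); err != nil {
		panic(err)
	}
}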
I0811 00:30:42.770353 1387850 kubeadm.go:390] StartCluster: {Name:addons-20210811003021-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0811 00:30:42.770495 1387850 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0811 00:30:42.809002 1387850 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0811 00:30:42.816207 1387850 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0811 00:30:42.822961 1387850 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0811 00:30:42.823066 1387850 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0811 00:30:42.830328 1387850 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0811 00:30:42.830370 1387850 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0811 00:30:43.619917 1387850 out.go:204] - Generating certificates and keys ...
I0811 00:30:49.880691 1387850 out.go:204] - Booting up control plane ...
I0811 00:31:06.451215 1387850 out.go:204] - Configuring RBAC rules ...
I0811 00:31:06.874304 1387850 cni.go:93] Creating CNI manager for ""
I0811 00:31:06.874325 1387850 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0811 00:31:06.874348 1387850 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0811 00:31:06.874455 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:06.874510 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=addons-20210811003021-1387367 minikube.k8s.io/updated_at=2021_08_11T00_31_06_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:07.395364 1387850 ops.go:34] apiserver oom_adj: -16
I0811 00:31:07.395478 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:07.985651 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:08.485872 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:08.985765 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:09.485151 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:09.985899 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:10.485129 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:10.985624 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:11.485105 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:11.985253 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:12.485152 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:12.985351 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:13.485134 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:13.986075 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:14.485781 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:14.985900 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:15.485861 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:15.986014 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:16.485778 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:16.985653 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:17.485947 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:17.985276 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:18.485896 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:18.985977 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:19.485799 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:19.985256 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:20.485459 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0811 00:31:20.663342 1387850 kubeadm.go:985] duration metric: took 13.78893335s to wait for elevateKubeSystemPrivileges.
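The burst of `kubectl get sa default` runs above is a simple readiness poll: kubeadm init has finished and the bootstrapper retries roughly every 500ms until the default service account exists, which is what the 13.79s elevateKubeSystemPrivileges metric measures. A hedged Go sketch of such a poll follows; the kubeconfig path is the one in the log, the timeout is arbitrary.

// sawait.go - sketch: retry "kubectl get sa default" until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the logged retries
	}
	fmt.Println("timed out waiting for the default service account")
}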
I0811 00:31:20.663367 1387850 kubeadm.go:392] StartCluster complete in 37.893022782s
I0811 00:31:20.663382 1387850 settings.go:142] acquiring lock: {Name:mk6e7f1e95cc0d18801bf31166529399345d1e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:31:20.663521 1387850 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
I0811 00:31:20.663950 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0811 00:31:21.189383 1387850 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210811003021-1387367" rescaled to 1
I0811 00:31:21.189462 1387850 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
I0811 00:31:21.193170 1387850 out.go:177] * Verifying Kubernetes components...
I0811 00:31:21.193243 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0811 00:31:21.189583 1387850 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0811 00:31:21.189906 1387850 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
I0811 00:31:21.193403 1387850 addons.go:59] Setting volumesnapshots=true in profile "addons-20210811003021-1387367"
I0811 00:31:21.193416 1387850 addons.go:135] Setting addon volumesnapshots=true in "addons-20210811003021-1387367"
I0811 00:31:21.193441 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:21.193953 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.194243 1387850 addons.go:59] Setting ingress=true in profile "addons-20210811003021-1387367"
I0811 00:31:21.194260 1387850 addons.go:135] Setting addon ingress=true in "addons-20210811003021-1387367"
I0811 00:31:21.194284 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:21.194705 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.194767 1387850 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210811003021-1387367"
I0811 00:31:21.194790 1387850 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210811003021-1387367"
I0811 00:31:21.194811 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:21.195183 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.195236 1387850 addons.go:59] Setting default-storageclass=true in profile "addons-20210811003021-1387367"
I0811 00:31:21.195247 1387850 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210811003021-1387367"
I0811 00:31:21.195465 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.195519 1387850 addons.go:59] Setting gcp-auth=true in profile "addons-20210811003021-1387367"
I0811 00:31:21.195539 1387850 mustload.go:65] Loading cluster: addons-20210811003021-1387367
I0811 00:31:21.195857 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.195907 1387850 addons.go:59] Setting olm=true in profile "addons-20210811003021-1387367"
I0811 00:31:21.195915 1387850 addons.go:135] Setting addon olm=true in "addons-20210811003021-1387367"
I0811 00:31:21.195931 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:21.196301 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.196350 1387850 addons.go:59] Setting metrics-server=true in profile "addons-20210811003021-1387367"
I0811 00:31:21.196358 1387850 addons.go:135] Setting addon metrics-server=true in "addons-20210811003021-1387367"
I0811 00:31:21.196372 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:21.196738 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.196790 1387850 addons.go:59] Setting registry=true in profile "addons-20210811003021-1387367"
I0811 00:31:21.196797 1387850 addons.go:135] Setting addon registry=true in "addons-20210811003021-1387367"
I0811 00:31:21.196812 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:21.197403 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.197413 1387850 addons.go:59] Setting storage-provisioner=true in profile "addons-20210811003021-1387367"
I0811 00:31:21.197526 1387850 addons.go:135] Setting addon storage-provisioner=true in "addons-20210811003021-1387367"
W0811 00:31:21.197549 1387850 addons.go:147] addon storage-provisioner should already be in state true
I0811 00:31:21.197579 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:21.198079 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.316737 1387850 out.go:177] - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0811 00:31:21.318960 1387850 out.go:177] - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
I0811 00:31:21.321029 1387850 out.go:177] - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0811 00:31:21.321081 1387850 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
I0811 00:31:21.321090 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
I0811 00:31:21.321153 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:21.393126 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
I0811 00:31:21.393208 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0811 00:31:21.393219 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0811 00:31:21.393552 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:21.566994 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
I0811 00:31:21.570897 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
I0811 00:31:21.573431 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
I0811 00:31:21.575941 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
I0811 00:31:21.577933 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
I0811 00:31:21.589807 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
I0811 00:31:21.590689 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:21.593774 1387850 addons.go:135] Setting addon default-storageclass=true in "addons-20210811003021-1387367"
W0811 00:31:21.593807 1387850 addons.go:147] addon default-storageclass should already be in state true
I0811 00:31:21.593832 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:21.594305 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:21.594464 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
I0811 00:31:21.602078 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
I0811 00:31:21.594805 1387850 out.go:177] - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
I0811 00:31:21.613391 1387850 out.go:177] - Using image quay.io/operator-framework/olm:v0.17.0
I0811 00:31:21.610752 1387850 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
I0811 00:31:21.634506 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0811 00:31:21.634522 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0811 00:31:21.634580 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:21.610760 1387850 out.go:177] - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
I0811 00:31:21.635648 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0811 00:31:21.635657 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
I0811 00:31:21.635701 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:21.644033 1387850 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0811 00:31:21.644148 1387850 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0811 00:31:21.644157 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0811 00:31:21.644215 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:21.659467 1387850 out.go:177] - Using image registry:2.7.1
I0811 00:31:21.663767 1387850 out.go:177] - Using image gcr.io/google_containers/kube-registry-proxy:0.4
I0811 00:31:21.665207 1387850 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
I0811 00:31:21.665236 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
I0811 00:31:21.665321 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:21.715375 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
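The docker container inspect calls above look up the host port that Docker published for the node container's 22/tcp; the ssh clients that follow dial that port on 127.0.0.1 (here 50250). As an illustration only, the same lookup can be reproduced by hand with the command already shown in the log:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-20210811003021-1387367
    # prints the published host port for 22/tcp, e.g. 50250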
I0811 00:31:21.740338 1387850 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0811 00:31:21.747616 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:21.767963 1387850 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
I0811 00:31:21.767996 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
I0811 00:31:21.768070 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:21.821081 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:31:21.893062 1387850 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0811 00:31:21.894732 1387850 node_ready.go:35] waiting up to 6m0s for node "addons-20210811003021-1387367" to be "Ready" ...
I0811 00:31:21.900813 1387850 node_ready.go:49] node "addons-20210811003021-1387367" has status "Ready":"True"
I0811 00:31:21.900877 1387850 node_ready.go:38] duration metric: took 6.121847ms waiting for node "addons-20210811003021-1387367" to be "Ready" ...
I0811 00:31:21.900891 1387850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0811 00:31:21.970648 1387850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace to be "Ready" ...
I0811 00:31:21.971138 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:31:21.988186 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:31:21.998543 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:31:22.022093 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:31:22.024598 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:31:22.025389 1387850 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0811 00:31:22.025403 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0811 00:31:22.025452 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:22.089084 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:31:22.145093 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:31:22.189558 1387850 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0811 00:31:22.189581 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0811 00:31:22.302897 1387850 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
I0811 00:31:22.302958 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
I0811 00:31:22.410765 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0811 00:31:22.422948 1387850 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0811 00:31:22.423015 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0811 00:31:22.426280 1387850 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0811 00:31:22.432260 1387850 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
I0811 00:31:22.432318 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
I0811 00:31:22.436077 1387850 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
I0811 00:31:22.436129 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0811 00:31:22.444025 1387850 addons.go:135] Setting addon gcp-auth=true in "addons-20210811003021-1387367"
I0811 00:31:22.444083 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
I0811 00:31:22.444621 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
I0811 00:31:22.494171 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0811 00:31:22.494196 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
I0811 00:31:22.507387 1387850 out.go:177] - Using image jettech/kube-webhook-certgen:v1.3.0
I0811 00:31:22.509963 1387850 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
I0811 00:31:22.510021 1387850 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0811 00:31:22.510031 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0811 00:31:22.510090 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
I0811 00:31:22.537941 1387850 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0811 00:31:22.537962 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
I0811 00:31:22.559492 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0811 00:31:22.559512 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
I0811 00:31:22.566425 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
I0811 00:31:22.567873 1387850 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
I0811 00:31:22.567891 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
I0811 00:31:22.624805 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
I0811 00:31:22.624829 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
I0811 00:31:22.720992 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
I0811 00:31:22.724130 1387850 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0811 00:31:22.724148 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
I0811 00:31:22.727176 1387850 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
I0811 00:31:22.727192 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
I0811 00:31:22.729946 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0811 00:31:22.764118 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0811 00:31:22.764137 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
I0811 00:31:22.774485 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
I0811 00:31:22.812352 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0811 00:31:22.812417 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
I0811 00:31:22.870611 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0811 00:31:22.917662 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0811 00:31:22.917720 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
I0811 00:31:22.943400 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0811 00:31:23.037187 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0811 00:31:23.037246 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
I0811 00:31:23.100781 1387850 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0811 00:31:23.100840 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
I0811 00:31:23.124682 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0811 00:31:23.217383 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0811 00:31:23.217443 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
I0811 00:31:23.268107 1387850 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0811 00:31:23.268163 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
I0811 00:31:23.349221 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0811 00:31:23.349241 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
I0811 00:31:23.433746 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0811 00:31:23.466200 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0811 00:31:23.466266 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
I0811 00:31:23.568633 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0811 00:31:23.568690 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
I0811 00:31:23.750358 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0811 00:31:23.750414 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
I0811 00:31:23.791988 1387850 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.898894924s)
I0811 00:31:23.792052 1387850 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
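The replace above rewrites the coredns ConfigMap so that in-cluster DNS resolves host.minikube.internal to the Docker network gateway (192.168.49.1). Judging from the sed arguments in the command, the injected Corefile fragment is a hosts block of roughly this shape (indentation approximate):

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

It can be inspected afterwards with kubectl -n kube-system get configmap coredns -o yaml.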
I0811 00:31:23.955311 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
I0811 00:31:23.955374 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
I0811 00:31:23.956419 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.545593968s)
I0811 00:31:24.082471 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
I0811 00:31:24.160978 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0811 00:31:24.161066 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
I0811 00:31:24.203405 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
I0811 00:31:24.203429 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
I0811 00:31:24.411828 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0811 00:31:24.411854 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0811 00:31:24.432074 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0811 00:31:26.152298 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
I0811 00:31:28.575273 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
I0811 00:31:31.053360 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
I0811 00:31:31.587406 1387850 pod_ready.go:97] error getting pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5wk4c" not found
I0811 00:31:31.587437 1387850 pod_ready.go:81] duration metric: took 9.616760181s waiting for pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace to be "Ready" ...
E0811 00:31:31.587449 1387850 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5wk4c" not found
I0811 00:31:31.587458 1387850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-j4xjh" in "kube-system" namespace to be "Ready" ...
I0811 00:31:31.759414 1387850 pod_ready.go:92] pod "coredns-558bd4d5db-j4xjh" in "kube-system" namespace has status "Ready":"True"
I0811 00:31:31.759439 1387850 pod_ready.go:81] duration metric: took 171.972167ms waiting for pod "coredns-558bd4d5db-j4xjh" in "kube-system" namespace to be "Ready" ...
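The "not found (skipping!)" entries above are benign: the coredns replica coredns-558bd4d5db-5wk4c was deleted while the waiter was watching it, so pod_ready logs the error, skips that pod, and moves on to the surviving replica, which is already Ready. A quick manual cross-check (a sketch, not part of the test itself):

    kubectl --context addons-20210811003021-1387367 -n kube-system \
      get pods -l k8s-app=kube-dns
    # expect coredns-558bd4d5db-j4xjh in the Running state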
I0811 00:31:31.759450 1387850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
I0811 00:31:31.874331 1387850 pod_ready.go:92] pod "etcd-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
I0811 00:31:31.874356 1387850 pod_ready.go:81] duration metric: took 114.898034ms waiting for pod "etcd-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
I0811 00:31:31.874369 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
I0811 00:31:31.877164 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.147157564s)
I0811 00:31:31.877240 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (9.156226911s)
W0811 00:31:31.877276 1387850 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
stderr:
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
I0811 00:31:31.877292 1387850 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
stderr:
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
I0811 00:31:31.877402 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (9.102897402s)
I0811 00:31:31.877416 1387850 addons.go:313] Verifying addon ingress=true in "addons-20210811003021-1387367"
I0811 00:31:31.887205 1387850 out.go:177] * Verifying ingress addon...
I0811 00:31:31.889037 1387850 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0811 00:31:31.877748 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.007067811s)
W0811 00:31:31.889240 1387850 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
I0811 00:31:31.889258 1387850 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
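Same story for the snapshot manifests: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in snapshot.storage.k8s.io/v1, which the API server cannot map to a resource until the volumesnapshotclasses CRD from the same batch has been established; the retry at 00:31:32 goes through. Whether the group is available can be checked with, for example:

    kubectl api-resources --api-group=snapshot.storage.k8s.io
    # should list volumesnapshotclasses, volumesnapshotcontents, volumesnapshots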
I0811 00:31:31.877785 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.934328602s)
I0811 00:31:31.889280 1387850 addons.go:313] Verifying addon registry=true in "addons-20210811003021-1387367"
I0811 00:31:31.877850 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.753144936s)
I0811 00:31:31.877905 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (8.444139497s)
I0811 00:31:31.878122 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.446019981s)
I0811 00:31:31.893287 1387850 addons.go:313] Verifying addon metrics-server=true in "addons-20210811003021-1387367"
I0811 00:31:31.893306 1387850 out.go:177] * Verifying registry addon...
I0811 00:31:31.894964 1387850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0811 00:31:31.895140 1387850 addons.go:313] Verifying addon gcp-auth=true in "addons-20210811003021-1387367"
I0811 00:31:31.897634 1387850 out.go:177] * Verifying gcp-auth addon...
I0811 00:31:31.899263 1387850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0811 00:31:31.893256 1387850 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210811003021-1387367"
I0811 00:31:31.902262 1387850 out.go:177] * Verifying csi-hostpath-driver addon...
I0811 00:31:31.903868 1387850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
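The kapi.go waiters that follow poll pods by the label selector and namespace named on each line until every matching pod reports Ready. Purely as an illustration, the equivalent one-off queries would be:

    kubectl --context addons-20210811003021-1387367 -n ingress-nginx \
      get pods -l app.kubernetes.io/name=ingress-nginx
    kubectl --context addons-20210811003021-1387367 -n kube-system \
      get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver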
I0811 00:31:31.957269 1387850 pod_ready.go:92] pod "kube-apiserver-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
I0811 00:31:31.957291 1387850 pod_ready.go:81] duration metric: took 82.914978ms waiting for pod "kube-apiserver-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
I0811 00:31:31.957302 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
I0811 00:31:31.969760 1387850 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0811 00:31:31.969785 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:31.973452 1387850 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0811 00:31:31.973479 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:31.973901 1387850 kapi.go:86] Found 2 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0811 00:31:31.973912 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:31.974638 1387850 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0811 00:31:31.974650 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:31.983912 1387850 pod_ready.go:92] pod "kube-controller-manager-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
I0811 00:31:31.983934 1387850 pod_ready.go:81] duration metric: took 26.622888ms waiting for pod "kube-controller-manager-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
I0811 00:31:31.983947 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbv8p" in "kube-system" namespace to be "Ready" ...
I0811 00:31:31.999660 1387850 pod_ready.go:92] pod "kube-proxy-hbv8p" in "kube-system" namespace has status "Ready":"True"
I0811 00:31:31.999681 1387850 pod_ready.go:81] duration metric: took 15.72646ms waiting for pod "kube-proxy-hbv8p" in "kube-system" namespace to be "Ready" ...
I0811 00:31:31.999692 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
I0811 00:31:32.134974 1387850 pod_ready.go:92] pod "kube-scheduler-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
I0811 00:31:32.134996 1387850 pod_ready.go:81] duration metric: took 135.293862ms waiting for pod "kube-scheduler-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
I0811 00:31:32.135007 1387850 pod_ready.go:38] duration metric: took 10.234102984s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0811 00:31:32.135022 1387850 api_server.go:50] waiting for apiserver process to appear ...
I0811 00:31:32.135065 1387850 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0811 00:31:32.157221 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
I0811 00:31:32.250282 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0811 00:31:32.278457 1387850 api_server.go:70] duration metric: took 11.088961421s to wait for apiserver process to appear ...
I0811 00:31:32.278478 1387850 api_server.go:86] waiting for apiserver healthz status ...
I0811 00:31:32.278488 1387850 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0811 00:31:32.307214 1387850 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
ok
I0811 00:31:32.308374 1387850 api_server.go:139] control plane version: v1.21.3
I0811 00:31:32.308394 1387850 api_server.go:129] duration metric: took 29.908897ms to wait for apiserver health ...
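The healthz probe above hits the API server's /healthz endpoint on the node IP and expects the literal body "ok". Assuming anonymous access to /healthz is allowed (the kubeadm default via the system:public-info-viewer binding), the same probe can be made by hand:

    curl -k https://192.168.49.2:8443/healthz
    # ok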
I0811 00:31:32.308401 1387850 system_pods.go:43] waiting for kube-system pods to appear ...
I0811 00:31:32.326005 1387850 system_pods.go:59] 17 kube-system pods found
I0811 00:31:32.326040 1387850 system_pods.go:61] "coredns-558bd4d5db-j4xjh" [f948b5ac-414e-4239-ad46-497ef8f75853] Running
I0811 00:31:32.326045 1387850 system_pods.go:61] "csi-hostpath-attacher-0" [2f0d8b28-ddaf-458b-b5b5-8b3c07c09415] Pending
I0811 00:31:32.326050 1387850 system_pods.go:61] "csi-hostpath-provisioner-0" [1ec66ec1-bc43-458c-aec5-9987f687ac44] Pending
I0811 00:31:32.326055 1387850 system_pods.go:61] "csi-hostpath-resizer-0" [79bc0e72-5889-4c3f-8670-8c2c53610472] Pending
I0811 00:31:32.326060 1387850 system_pods.go:61] "csi-hostpath-snapshotter-0" [adee0893-0da6-42b1-b77a-115426aeb95d] Pending
I0811 00:31:32.326065 1387850 system_pods.go:61] "csi-hostpathplugin-0" [6c1cecb2-45cd-41c0-b435-d9d52972488e] Pending
I0811 00:31:32.326070 1387850 system_pods.go:61] "etcd-addons-20210811003021-1387367" [66a09e0e-6be7-443c-8a42-6f5c84c19094] Running
I0811 00:31:32.326076 1387850 system_pods.go:61] "kube-apiserver-addons-20210811003021-1387367" [9691ed48-418f-4dad-8ac3-30d61a430bbf] Running
I0811 00:31:32.326085 1387850 system_pods.go:61] "kube-controller-manager-addons-20210811003021-1387367" [3b729013-1dc6-4788-9f3c-f7aa402e59e1] Running
I0811 00:31:32.326089 1387850 system_pods.go:61] "kube-proxy-hbv8p" [368541dc-ff39-4aee-af59-de331b32e889] Running
I0811 00:31:32.326099 1387850 system_pods.go:61] "kube-scheduler-addons-20210811003021-1387367" [44340cdb-fad4-460c-994c-cf7586c7cb72] Running
I0811 00:31:32.326106 1387850 system_pods.go:61] "metrics-server-77c99ccb96-7bz4t" [f135d883-ab80-4dd8-a141-333424152bcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0811 00:31:32.326114 1387850 system_pods.go:61] "registry-dzdlw" [4a872b2d-a2b1-46f9-9afd-c52b6647383f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0811 00:31:32.326124 1387850 system_pods.go:61] "registry-proxy-xfrxz" [19d31762-bc36-413f-8533-e97b57d38a28] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0811 00:31:32.326132 1387850 system_pods.go:61] "snapshot-controller-989f9ddc8-f8q5j" [b992001c-a1c6-4425-b360-98696726a82a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0811 00:31:32.326144 1387850 system_pods.go:61] "snapshot-controller-989f9ddc8-pjvmj" [9502385f-ad82-4081-bc88-a44d574dad9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0811 00:31:32.326150 1387850 system_pods.go:61] "storage-provisioner" [f5eb4d07-1355-48c6-aa1c-17031e9d86b9] Running
I0811 00:31:32.326160 1387850 system_pods.go:74] duration metric: took 17.753408ms to wait for pod list to return data ...
I0811 00:31:32.326168 1387850 default_sa.go:34] waiting for default service account to be created ...
I0811 00:31:32.473443 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:32.493205 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:32.493584 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:32.494370 1387850 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0811 00:31:32.494390 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:32.510135 1387850 default_sa.go:45] found service account: "default"
I0811 00:31:32.510161 1387850 default_sa.go:55] duration metric: took 183.984313ms for default service account to be created ...
I0811 00:31:32.510170 1387850 system_pods.go:116] waiting for k8s-apps to be running ...
I0811 00:31:32.777565 1387850 system_pods.go:86] 17 kube-system pods found
I0811 00:31:32.777597 1387850 system_pods.go:89] "coredns-558bd4d5db-j4xjh" [f948b5ac-414e-4239-ad46-497ef8f75853] Running
I0811 00:31:32.777605 1387850 system_pods.go:89] "csi-hostpath-attacher-0" [2f0d8b28-ddaf-458b-b5b5-8b3c07c09415] Pending
I0811 00:31:32.777610 1387850 system_pods.go:89] "csi-hostpath-provisioner-0" [1ec66ec1-bc43-458c-aec5-9987f687ac44] Pending
I0811 00:31:32.777615 1387850 system_pods.go:89] "csi-hostpath-resizer-0" [79bc0e72-5889-4c3f-8670-8c2c53610472] Pending
I0811 00:31:32.777620 1387850 system_pods.go:89] "csi-hostpath-snapshotter-0" [adee0893-0da6-42b1-b77a-115426aeb95d] Pending
I0811 00:31:32.777629 1387850 system_pods.go:89] "csi-hostpathplugin-0" [6c1cecb2-45cd-41c0-b435-d9d52972488e] Pending
I0811 00:31:32.777634 1387850 system_pods.go:89] "etcd-addons-20210811003021-1387367" [66a09e0e-6be7-443c-8a42-6f5c84c19094] Running
I0811 00:31:32.777645 1387850 system_pods.go:89] "kube-apiserver-addons-20210811003021-1387367" [9691ed48-418f-4dad-8ac3-30d61a430bbf] Running
I0811 00:31:32.777652 1387850 system_pods.go:89] "kube-controller-manager-addons-20210811003021-1387367" [3b729013-1dc6-4788-9f3c-f7aa402e59e1] Running
I0811 00:31:32.777661 1387850 system_pods.go:89] "kube-proxy-hbv8p" [368541dc-ff39-4aee-af59-de331b32e889] Running
I0811 00:31:32.777666 1387850 system_pods.go:89] "kube-scheduler-addons-20210811003021-1387367" [44340cdb-fad4-460c-994c-cf7586c7cb72] Running
I0811 00:31:32.777680 1387850 system_pods.go:89] "metrics-server-77c99ccb96-7bz4t" [f135d883-ab80-4dd8-a141-333424152bcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0811 00:31:32.777693 1387850 system_pods.go:89] "registry-dzdlw" [4a872b2d-a2b1-46f9-9afd-c52b6647383f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0811 00:31:32.777708 1387850 system_pods.go:89] "registry-proxy-xfrxz" [19d31762-bc36-413f-8533-e97b57d38a28] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0811 00:31:32.777716 1387850 system_pods.go:89] "snapshot-controller-989f9ddc8-f8q5j" [b992001c-a1c6-4425-b360-98696726a82a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0811 00:31:32.777724 1387850 system_pods.go:89] "snapshot-controller-989f9ddc8-pjvmj" [9502385f-ad82-4081-bc88-a44d574dad9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0811 00:31:32.777730 1387850 system_pods.go:89] "storage-provisioner" [f5eb4d07-1355-48c6-aa1c-17031e9d86b9] Running
I0811 00:31:32.777737 1387850 system_pods.go:126] duration metric: took 267.562785ms to wait for k8s-apps to be running ...
I0811 00:31:32.777744 1387850 system_svc.go:44] waiting for kubelet service to be running ....
I0811 00:31:32.777795 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0811 00:31:33.022664 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:33.043137 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:33.043695 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:33.076210 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:33.475631 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:33.487013 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:33.487474 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:33.488225 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:33.973986 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:33.978066 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:33.978643 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:33.982551 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:34.475328 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:34.484737 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:34.485574 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:34.491047 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:34.978754 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:34.986378 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:35.002282 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:35.003252 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:35.492406 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:35.498168 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:35.498801 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:35.507714 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:35.849572 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (3.692315583s)
I0811 00:31:35.849751 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.599438997s)
I0811 00:31:35.849801 1387850 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.07199149s)
I0811 00:31:35.849823 1387850 system_svc.go:56] duration metric: took 3.072076781s WaitForService to wait for kubelet.
I0811 00:31:35.849853 1387850 kubeadm.go:547] duration metric: took 14.66035318s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0811 00:31:35.849889 1387850 node_conditions.go:102] verifying NodePressure condition ...
I0811 00:31:35.856082 1387850 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
I0811 00:31:35.856156 1387850 node_conditions.go:123] node cpu capacity is 2
I0811 00:31:35.856183 1387850 node_conditions.go:105] duration metric: took 6.277447ms to run NodePressure ...
I0811 00:31:35.856204 1387850 start.go:231] waiting for startup goroutines ...
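The NodePressure check reads the node's reported capacity (60796312Ki of ephemeral storage and 2 CPUs here) to confirm the cluster is not starting out short of disk or CPU. A hand-run equivalent, for illustration only:

    kubectl --context addons-20210811003021-1387367 \
      get node addons-20210811003021-1387367 -o jsonpath='{.status.capacity}'
    # shows the cpu and ephemeral-storage capacity reported above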
I0811 00:31:36.005256 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:36.013661 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:36.014789 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:36.015952 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:36.473926 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:36.483249 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:36.491648 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:36.492571 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:36.974069 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:36.986531 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:37.006446 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:37.013357 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:37.489649 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:37.490803 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:37.491434 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:37.504790 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:37.975170 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:37.989580 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:37.989802 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:38.025303 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:38.474918 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:38.492730 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:38.496663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:38.497970 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:38.973085 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:38.978997 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:38.979735 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:38.982227 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:39.474799 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:39.481528 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:39.481921 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:39.484317 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:39.972804 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:39.978286 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:39.979517 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:39.981143 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:40.474715 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:40.481427 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:40.488665 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:40.494651 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:40.976036 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:40.980427 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:40.983056 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:40.987033 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:41.476447 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:41.479669 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:41.480016 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:41.483594 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:41.973617 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:41.978089 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:41.982266 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:41.984663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:42.472736 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:42.487219 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:42.492985 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:42.493947 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:42.973539 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:42.982946 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:42.986923 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:42.987907 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:43.473857 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:43.479414 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:43.481641 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:43.483870 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:43.973517 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:43.982133 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:43.983164 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:43.983422 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:44.473837 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:44.479785 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:44.484353 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:44.488415 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:44.979994 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:44.985076 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:44.986656 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:44.992396 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:45.473966 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:45.481107 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:45.481941 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:45.487107 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:45.988284 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:46.002665 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0811 00:31:46.002780 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:46.007448 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:46.474794 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:46.482191 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:46.485676 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:46.487225 1387850 kapi.go:108] duration metric: took 14.592259009s to wait for kubernetes.io/minikube-addons=registry ...
I0811 00:31:46.974885 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:46.981478 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:46.989618 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:47.487399 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:47.491281 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:47.492579 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:47.975251 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:47.985471 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:47.986668 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:48.490136 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:48.490788 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:48.506411 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:48.973392 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:48.977674 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:48.981056 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:49.474082 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:49.481091 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:49.485354 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:49.973955 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:49.996161 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:49.997622 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:50.475171 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:50.524381 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:50.525120 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:50.974322 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:50.982138 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:50.982860 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:51.474535 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:51.479680 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:51.480476 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:51.973249 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:51.978600 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:51.981854 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:52.473226 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:52.477902 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:52.482802 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:52.973434 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:52.978777 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:52.980477 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:53.473319 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:53.477756 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:53.487033 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:53.973687 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:53.978698 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:53.981280 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:54.475089 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:54.481790 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:54.482384 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:54.985792 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:54.988767 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:54.990896 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:55.473392 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:55.477987 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:55.481815 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:55.973541 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:55.977681 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:55.980675 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:56.474324 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:56.481953 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:56.484326 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:56.974064 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:56.982020 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:56.982410 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:57.473755 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:57.481929 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:57.482893 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:57.973431 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:57.982154 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:57.982719 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:58.473822 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:58.481555 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:58.482077 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:58.973492 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:58.979950 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:58.981978 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:59.473887 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:59.477935 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:59.481649 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:31:59.972952 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:31:59.982204 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:31:59.986360 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:00.473567 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:00.480713 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:00.483313 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:00.973469 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:00.977297 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:00.980898 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:01.495843 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:01.499200 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:01.504461 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:01.973397 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:01.977703 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:01.981521 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:02.473108 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:02.480024 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:02.480860 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:02.973624 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:02.978280 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:02.981042 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:03.473799 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:03.480741 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:03.481368 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:03.972770 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:03.983282 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:03.984249 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:04.491095 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:04.492784 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:04.492970 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:04.974758 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:04.984471 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:04.985330 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:05.472850 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:05.477703 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:05.482839 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:05.973192 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0811 00:32:05.978328 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:05.980518 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:06.473106 1387850 kapi.go:108] duration metric: took 34.573837309s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0811 00:32:06.475346 1387850 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210811003021-1387367 cluster.
I0811 00:32:06.477505 1387850 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0811 00:32:06.479544 1387850 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
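The three gcp-auth messages above describe an opt-out mechanism: pods carrying the `gcp-auth-skip-secret` label key are skipped by the credential-mounting webhook. As an illustration only, here is a minimal client-go sketch of creating such a pod; the pod name, image, namespace, and the "true" value are assumptions for the example, and only the label key comes from the message above.

// Sketch: create a pod the gcp-auth addon will leave alone, assuming the
// label key alone (value arbitrary) is what the webhook checks, per the
// minikube message above. Names and image are illustrative.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (the context this run configures).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			// Label key from the gcp-auth message; the value is an assumption.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	// Because the label is present at admission time, the webhook does not
	// mount GCP credentials into this pod.
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}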
I0811 00:32:06.481981 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:06.487657 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:06.981307 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:06.981745 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:07.485847 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:07.488036 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:07.980464 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:07.982199 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:08.479056 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:08.487132 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:08.983438 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:08.989743 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:09.478278 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:09.482809 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:09.977227 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:09.981531 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:10.479714 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:10.480748 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:10.980089 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0811 00:32:10.980855 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:11.479066 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:11.480977 1387850 kapi.go:108] duration metric: took 39.577106963s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0811 00:32:11.978497 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:12.477434 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:12.977568 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:13.478968 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:13.978087 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:14.478651 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:14.978215 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:15.479057 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:15.978520 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:16.478308 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:16.977654 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:17.478970 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:17.978263 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:18.479065 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:18.978518 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:19.477635 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:19.978219 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:20.482858 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:20.978313 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:21.479093 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:21.977637 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:22.477774 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:22.977780 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:23.478605 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:23.977808 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:24.477511 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:24.977509 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:25.480776 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:25.978310 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:26.478828 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:26.978037 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:27.478852 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:27.978490 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:28.485097 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:28.978352 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:29.482810 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:29.978509 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:30.478065 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:30.978093 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:31.478736 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:31.978713 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:32.478985 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:32.977984 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:33.478992 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:33.978850 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:34.482336 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:34.978545 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:35.478663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:35.978031 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:36.478559 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:36.979038 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:37.478422 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:37.978166 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:38.478873 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:38.977654 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:39.482663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:39.977950 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:40.482296 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0811 00:32:40.978478 1387850 kapi.go:108] duration metric: took 1m9.089435631s to wait for app.kubernetes.io/name=ingress-nginx ...
I0811 00:32:40.981216 1387850 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, olm, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
I0811 00:32:40.981241 1387850 addons.go:344] enableAddons completed in 1m19.791344476s
I0811 00:32:41.039327 1387850 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
I0811 00:32:41.041912 1387850 out.go:177] * Done! kubectl is now configured to use "addons-20210811003021-1387367" cluster and "default" namespace by default
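The repeated kapi.go:96 / kapi.go:108 lines above are a poll-and-report loop: list pods by label selector, log the phase while any match is still Pending, and record the elapsed time once all are Running. The sketch below is not minikube's actual kapi.go implementation, just the pattern the log suggests; the waitForLabel name is made up and the 500ms interval is inferred from the roughly half-second spacing of the timestamps.

// Sketch of the waiting pattern shown in the log above (assumptions noted).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until all are Running or
// the timeout expires, logging the current phase on each pass.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
					break
				}
			}
			if allRunning {
				fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(context.TODO(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}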
*
* ==> Docker <==
* -- Logs begin at Wed 2021-08-11 00:30:27 UTC, end at Wed 2021-08-11 00:35:34 UTC. --
Aug 11 00:32:01 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:01.620796151Z" level=info msg="ignoring event" container=9247acdc5d64f443b68db1cc6df58dd5e5f4feaffb68b6da6ba21b7bd4ab39b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:02 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:02.097502833Z" level=info msg="ignoring event" container=e9f14f481ab19b6c6e7291aa61d196c93e741d023142bb277d525f7eafba2af7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:02 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:02.253946601Z" level=info msg="ignoring event" container=62ab535b11128e12d593bf48f375e4900a6d8cfe79458696ce2f1101199f7e2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:02 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:02.812654117Z" level=info msg="ignoring event" container=db4d9e368f2580d69f88a161fa33362694084315575f1a0a21aeaf98413c7581 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:03 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:03.147865444Z" level=info msg="ignoring event" container=3c1b9bd41a420e396cc6afafa5fda82d26f9322d71f8ea1e6a339e9620d7018f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:03 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:03.535036967Z" level=warning msg="reference for unknown type: " digest="sha256:c407ad6ee97d8a0e8a21c713e2d9af66aaf73315e4a123874c00b786f962f3cd" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:c407ad6ee97d8a0e8a21c713e2d9af66aaf73315e4a123874c00b786f962f3cd"
Aug 11 00:32:05 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:05.363269927Z" level=warning msg="reference for unknown type: " digest="sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108" remote="k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108"
Aug 11 00:32:07 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:07.285998850Z" level=warning msg="reference for unknown type: " digest="sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659" remote="k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659"
Aug 11 00:32:09 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:09.310317276Z" level=warning msg="reference for unknown type: " digest="sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994" remote="k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994"
Aug 11 00:32:15 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:15.725559728Z" level=info msg="ignoring event" container=df2492023ba1fcfd3bbc222fde5b8a84637a986c090f7faa5c80fc0697623c10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:16 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:16.755442422Z" level=info msg="ignoring event" container=73446dad2ab7fc3a8445c107bec81ade8dd2693ce1b4982bfe8cbe9ccfb1b801 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:27 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:27.741112515Z" level=info msg="ignoring event" container=edfbb95e4087c049db7afcb9e1f8ce0508d820fd630038e8da3e4c5529efa58e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:33 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:33.357170162Z" level=warning msg="reference for unknown type: " digest="sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" remote="k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a"
Aug 11 00:32:49 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:49.730895493Z" level=info msg="ignoring event" container=56c8e32523e1b87b7f33c103a15d47ff2f319a0571ea580c5aef97731015362a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:51 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:51.770304489Z" level=info msg="ignoring event" container=e9eeac65f955c313cbc23a77e5764104ecaf110d3cdeebf1b41e5f20aa8d8d05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:53 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:53.264806699Z" level=info msg="ignoring event" container=5c39d034cf49e0969331d75b84592596fb5aa5c3b12ccc45db3a7e8d7080dea7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:32:53 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:53.993111992Z" level=info msg="ignoring event" container=25973087bf63929228da7b9f1c9d158ea8900ab97176337e251a3438f1d147f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:33:20 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:33:20.739065582Z" level=info msg="ignoring event" container=f138a65ea0d8768aa9d6fd38db702cebb478c50a5499ad0d1aa321cc28d8aa55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:33:37 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:33:37.754803103Z" level=info msg="ignoring event" container=5314100ae30c55c2c5fca50b59180db59f7d0052b4c4d71500579f8559290a48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:33:44 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:33:44.728487709Z" level=info msg="ignoring event" container=38c2a91f09c071cbeb4b3668c2ebd6a57f5b9c1dfdc3b138a383795cc803f634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:34:49 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:34:49.744487100Z" level=info msg="ignoring event" container=b14e7fada5642b72a77355be3587194eca52b65c2e16b1a9d03d4a29ec8ff73c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:35:01 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:35:01.733119773Z" level=info msg="ignoring event" container=93d05e6a4fdeadc429b3b8680409bcc95a4911306bd7c468e2c22e8baba6c554 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:35:14 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:35:14.725639809Z" level=info msg="ignoring event" container=92a9c313b50af3db83df40b9e3de0bb7efa59a28ae8f1cc94597b359047e2d81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:35:32 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:35:32.869368899Z" level=info msg="ignoring event" container=716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 00:35:32 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:35:32.945418515Z" level=info msg="ignoring event" container=521d909af58b2ca6b97ebc302833c3719f1cdf051b0e5a8342fb46c49a87d86c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
92a9c313b50af d544402579747 20 seconds ago Exited olm-operator 5 239321d9715a9
93d05e6a4fdea d544402579747 33 seconds ago Exited catalog-operator 5 dc27d55e9b2e5
b14e7fada5642 60dc18151daf8 45 seconds ago Exited registry-proxy 5 ad962c09e2ff7
383628dc34c7f k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a 2 minutes ago Running controller 0 87f9e6f6e10e0
f9d910b0983cb k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 3 minutes ago Running liveness-probe 0 8e312aa6ef3e1
86c4bb5905f03 k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659 3 minutes ago Running hostpath 0 8e312aa6ef3e1
e6aa1f5da5206 k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 3 minutes ago Running node-driver-registrar 0 8e312aa6ef3e1
c54d1e7369442 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:c407ad6ee97d8a0e8a21c713e2d9af66aaf73315e4a123874c00b786f962f3cd 3 minutes ago Running gcp-auth 0 a7b60bbf33b9e
3b180fbf110d5 k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16 3 minutes ago Running csi-external-health-monitor-controller 0 8e312aa6ef3e1
62ab535b11128 a883f7fc35610 3 minutes ago Exited patch 1 3c1b9bd41a420
9247acdc5d64f jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 3 minutes ago Exited create 0 e9f14f481ab19
f491b4ee0dd1e k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 3 minutes ago Running csi-attacher 0 55d0941342052
ee41f2c71eaef k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a 3 minutes ago Running csi-resizer 0 b83b8d797e576
51cfceedd8f38 k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 3 minutes ago Running csi-snapshotter 0 6c7f1ea7004a5
8792838a02368 k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 3 minutes ago Running csi-provisioner 0 bc4b5104a1fa8
e8bdb4e95a016 k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02 3 minutes ago Running csi-external-health-monitor-agent 0 8e312aa6ef3e1
34d13b67bdbb1 622522dfd285b 3 minutes ago Exited patch 1 59f9d2cea4a05
86a9debc99cc3 jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689 3 minutes ago Exited create 0 68356fc459748
e8470704d7993 k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 3 minutes ago Running volume-snapshot-controller 0 a188b55eb2300
29e27695dc868 k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 3 minutes ago Running volume-snapshot-controller 0 9abc8197d88f7
849a40fa43175 k8s.gcr.io/metrics-server/metrics-server@sha256:dbc33d7d35d2a9cc5ab402005aa7a0d13be6192f3550c7d42cba8d2d5e3a5d62 3 minutes ago Running metrics-server 0 f7efddac80035
0d6ae04912a61 ba04bb24b9575 4 minutes ago Running storage-provisioner 0 c19163d31a596
4f14ad2dc9238 1a1f05a2cd7c2 4 minutes ago Running coredns 0 f3126492d7db3
3e17f7de9e8a2 4ea38350a1beb 4 minutes ago Running kube-proxy 0 4393665d45427
178036f64854a cb310ff289d79 4 minutes ago Running kube-controller-manager 0 7e5d403628742
daa4bc492ed71 05b738aa1bc63 4 minutes ago Running etcd 0 5c7734c8acc19
107ea2d3d596b 44a6d50ef170d 4 minutes ago Running kube-apiserver 0 efd4677540c6b
4f7326edc3cff 31a3b96cefc1e 4 minutes ago Running kube-scheduler 0 367240f7e40a9
*
* ==> coredns [4f14ad2dc923] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.0
linux/arm64, go1.15.3, 054c9ae
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
[INFO] Reloading complete
*
* ==> describe nodes <==
* Name: addons-20210811003021-1387367
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-20210811003021-1387367
kubernetes.io/os=linux
minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
minikube.k8s.io/name=addons-20210811003021-1387367
minikube.k8s.io/updated_at=2021_08_11T00_31_06_0700
minikube.k8s.io/version=v1.22.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-20210811003021-1387367
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210811003021-1387367"}
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 11 Aug 2021 00:31:04 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-20210811003021-1387367
AcquireTime: <unset>
RenewTime: Wed, 11 Aug 2021 00:35:25 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 11 Aug 2021 00:33:12 +0000 Wed, 11 Aug 2021 00:30:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 11 Aug 2021 00:33:12 +0000 Wed, 11 Aug 2021 00:30:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 11 Aug 2021 00:33:12 +0000 Wed, 11 Aug 2021 00:30:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 11 Aug 2021 00:33:12 +0000 Wed, 11 Aug 2021 00:31:20 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-20210811003021-1387367
Capacity:
cpu: 2
ephemeral-storage: 60796312Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8033460Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 60796312Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8033460Ki
pods: 110
System Info:
Machine ID: 80c525a0c99c4bf099c0cbf9c365b032
System UUID: 7597b455-7869-476e-86a2-9b994506f601
Boot ID: dff2c102-a0cf-4fb0-a2ea-36617f3a3229
Kernel Version: 5.8.0-1041-aws
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.3
Kube-Proxy Version: v1.21.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (21 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
gcp-auth gcp-auth-5954cc4898-vdwfq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m8s
ingress-nginx ingress-nginx-controller-59b45fb494-tt28h 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 4m6s
kube-system coredns-558bd4d5db-j4xjh 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 4m14s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m3s
kube-system csi-hostpath-provisioner-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m3s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m3s
kube-system csi-hostpath-snapshotter-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m3s
kube-system csi-hostpathplugin-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m3s
kube-system etcd-addons-20210811003021-1387367 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 4m24s
kube-system kube-apiserver-addons-20210811003021-1387367 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m24s
kube-system kube-controller-manager-addons-20210811003021-1387367 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m24s
kube-system kube-proxy-hbv8p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m14s
kube-system kube-scheduler-addons-20210811003021-1387367 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m24s
kube-system metrics-server-77c99ccb96-7bz4t 100m (5%) 0 (0%) 300Mi (3%) 0 (0%) 4m9s
kube-system registry-dzdlw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m9s
kube-system registry-proxy-xfrxz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m9s
kube-system snapshot-controller-989f9ddc8-f8q5j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m9s
kube-system snapshot-controller-989f9ddc8-pjvmj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m9s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m11s
olm catalog-operator-75d496484d-lftth 10m (0%) 0 (0%) 80Mi (1%) 0 (0%) 4m3s
olm olm-operator-859c88c96-zfpv9 10m (0%) 0 (0%) 160Mi (2%) 0 (0%) 4m3s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 970m (48%) 0 (0%)
memory 800Mi (10%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 4m39s (x5 over 4m40s) kubelet Node addons-20210811003021-1387367 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m39s (x4 over 4m40s) kubelet Node addons-20210811003021-1387367 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m39s (x4 over 4m40s) kubelet Node addons-20210811003021-1387367 status is now: NodeHasSufficientPID
Normal Starting 4m24s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m24s kubelet Node addons-20210811003021-1387367 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m24s kubelet Node addons-20210811003021-1387367 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m24s kubelet Node addons-20210811003021-1387367 status is now: NodeHasSufficientPID
Normal NodeNotReady 4m24s kubelet Node addons-20210811003021-1387367 status is now: NodeNotReady
Normal NodeAllocatableEnforced 4m24s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 4m14s kubelet Node addons-20210811003021-1387367 status is now: NodeReady
Normal Starting 4m13s kube-proxy Starting kube-proxy.
*
* ==> dmesg <==
* [ +0.001104] FS-Cache: O-key=[8] 'c762010000000000'
[ +0.000863] FS-Cache: N-cookie c=000000006895995f [p=000000003cfe13d3 fl=2 nc=0 na=1]
[ +0.001353] FS-Cache: N-cookie d=00000000d0f41ca1 n=0000000007d05ee7
[ +0.001085] FS-Cache: N-key=[8] 'c762010000000000'
[Aug10 23:20] FS-Cache: Duplicate cookie detected
[ +0.000856] FS-Cache: O-cookie c=00000000af756993 [p=000000003cfe13d3 fl=226 nc=0 na=1]
[ +0.001346] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000009356b987
[ +0.001071] FS-Cache: O-key=[8] 'c562010000000000'
[ +0.000838] FS-Cache: N-cookie c=0000000062b369eb [p=000000003cfe13d3 fl=2 nc=0 na=1]
[ +0.001331] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000e0c82591
[ +0.001061] FS-Cache: N-key=[8] 'c562010000000000'
[ +0.001531] FS-Cache: Duplicate cookie detected
[ +0.000801] FS-Cache: O-cookie c=00000000ccb09f62 [p=000000003cfe13d3 fl=226 nc=0 na=1]
[ +0.001326] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000001c672d8a
[ +0.001069] FS-Cache: O-key=[8] 'c762010000000000'
[ +0.001140] FS-Cache: N-cookie c=0000000062b369eb [p=000000003cfe13d3 fl=2 nc=0 na=1]
[ +0.001307] FS-Cache: N-cookie d=00000000d0f41ca1 n=0000000083a2ea2e
[ +0.001068] FS-Cache: N-key=[8] 'c762010000000000'
[ +0.001828] FS-Cache: Duplicate cookie detected
[ +0.000775] FS-Cache: O-cookie c=0000000089195cf5 [p=000000003cfe13d3 fl=226 nc=0 na=1]
[ +0.001346] FS-Cache: O-cookie d=00000000d0f41ca1 n=0000000024759c93
[ +0.001076] FS-Cache: O-key=[8] 'c662010000000000'
[ +0.000853] FS-Cache: N-cookie c=0000000062b369eb [p=000000003cfe13d3 fl=2 nc=0 na=1]
[ +0.001320] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000f79fca59
[ +0.001058] FS-Cache: N-key=[8] 'c662010000000000'
*
* ==> etcd [daa4bc492ed7] <==
* 2021-08-11 00:31:30.568152 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:31:40.565963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:31:50.570132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:32:00.566454 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:32:10.566448 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:32:20.566000 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:32:30.566032 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:32:40.566114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:32:50.566765 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:33:00.566080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:33:10.566895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:33:20.566697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:33:30.565878 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:33:40.566563 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:33:50.566261 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:34:00.566545 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:34:10.566101 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:34:20.566261 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:34:30.566469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:34:40.566194 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:34:50.565895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:35:00.566330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:35:10.566237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:35:20.566040 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 00:35:30.566446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> kernel <==
* 00:35:34 up 10:18, 0 users, load average: 0.97, 1.96, 2.58
Linux addons-20210811003021-1387367 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [107ea2d3d596] <==
* E0811 00:31:45.992225 1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.62.170:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.62.170:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.62.170:443: connect: connection refused
I0811 00:31:50.563247 1 client.go:360] parsed scheme: "endpoint"
I0811 00:31:50.563292 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
I0811 00:31:50.615736 1 client.go:360] parsed scheme: "endpoint"
I0811 00:31:50.615771 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
I0811 00:31:50.944487 1 client.go:360] parsed scheme: "endpoint"
I0811 00:31:50.944529 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
I0811 00:32:10.570358 1 client.go:360] parsed scheme: "passthrough"
I0811 00:32:10.570402 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0811 00:32:10.570411 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 00:32:54.476101 1 client.go:360] parsed scheme: "passthrough"
I0811 00:32:54.476146 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0811 00:32:54.476155 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 00:33:30.063609 1 client.go:360] parsed scheme: "passthrough"
I0811 00:33:30.063672 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0811 00:33:30.063682 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 00:34:09.055310 1 client.go:360] parsed scheme: "passthrough"
I0811 00:34:09.055354 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0811 00:34:09.055362 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 00:34:47.087863 1 client.go:360] parsed scheme: "passthrough"
I0811 00:34:47.087969 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0811 00:34:47.087995 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 00:35:20.426773 1 client.go:360] parsed scheme: "passthrough"
I0811 00:35:20.426821 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0811 00:35:20.426847 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
*
* ==> kube-controller-manager [178036f64854] <==
* I0811 00:31:31.010688 1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0811 00:31:31.125642 1 event.go:291] "Event occurred" object="kube-system/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0811 00:31:31.267458 1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0811 00:31:31.363396 1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0811 00:31:31.473562 1 event.go:291] "Event occurred" object="olm/olm-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set olm-operator-859c88c96 to 1"
I0811 00:31:31.559602 1 event.go:291] "Event occurred" object="olm/olm-operator-859c88c96" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: olm-operator-859c88c96-zfpv9"
I0811 00:31:31.620453 1 event.go:291] "Event occurred" object="olm/catalog-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set catalog-operator-75d496484d to 1"
I0811 00:31:31.661627 1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
I0811 00:31:31.669704 1 event.go:291] "Event occurred" object="olm/catalog-operator-75d496484d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: catalog-operator-75d496484d-lftth"
I0811 00:31:31.898491 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-create-grgf6"
I0811 00:31:32.091158 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-patch-xnff6"
I0811 00:31:47.393791 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0811 00:31:49.635088 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0811 00:31:50.482378 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
I0811 00:31:50.482416 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for volumesnapshots.snapshot.storage.k8s.io
I0811 00:31:50.482448 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
I0811 00:31:50.482467 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
I0811 00:31:50.482490 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
I0811 00:31:50.482519 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
I0811 00:31:50.482571 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0811 00:31:50.683331 1 shared_informer.go:247] Caches are synced for resource quota
I0811 00:31:50.915593 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0811 00:31:51.015957 1 shared_informer.go:247] Caches are synced for garbage collector
I0811 00:32:02.036947 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0811 00:32:03.086565 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
*
* ==> kube-proxy [3e17f7de9e8a] <==
* I0811 00:31:21.677244 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0811 00:31:21.677330 1 server_others.go:140] Detected node IP 192.168.49.2
W0811 00:31:21.677372 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I0811 00:31:21.789613 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0811 00:31:21.789651 1 server_others.go:212] Using iptables Proxier.
I0811 00:31:21.789661 1 server_others.go:219] creating dualStackProxier for iptables.
W0811 00:31:21.789673 1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0811 00:31:21.789947 1 server.go:643] Version: v1.21.3
I0811 00:31:21.855080 1 config.go:315] Starting service config controller
I0811 00:31:21.855098 1 shared_informer.go:240] Waiting for caches to sync for service config
I0811 00:31:21.855222 1 config.go:224] Starting endpoint slice config controller
I0811 00:31:21.855227 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0811 00:31:21.868516 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 00:31:21.870560 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0811 00:31:21.955804 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0811 00:31:21.955864 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [4f7326edc3cf] <==
* W0811 00:31:03.888717 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0811 00:31:03.888737 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0811 00:31:04.007158 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0811 00:31:04.010685 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0811 00:31:04.010727 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0811 00:31:04.022524 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0811 00:31:04.023548 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0811 00:31:04.024389 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0811 00:31:04.024468 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0811 00:31:04.024200 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0811 00:31:04.024270 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0811 00:31:04.024328 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0811 00:31:04.024586 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0811 00:31:04.024664 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0811 00:31:04.024722 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0811 00:31:04.024777 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0811 00:31:04.024827 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0811 00:31:04.024886 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0811 00:31:04.025054 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0811 00:31:04.025179 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0811 00:31:04.873178 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0811 00:31:04.947036 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0811 00:31:04.986978 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0811 00:31:05.021088 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0811 00:31:07.122862 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Wed 2021-08-11 00:30:27 UTC, end at Wed 2021-08-11 00:35:34 UTC. --
Aug 11 00:35:11 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:11.700328 2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
Aug 11 00:35:11 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:11.942445 2321 scope.go:111] "RemoveContainer" containerID="93d05e6a4fdeadc429b3b8680409bcc95a4911306bd7c468e2c22e8baba6c554"
Aug 11 00:35:11 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:11.942873 2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
Aug 11 00:35:12 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:12.569632 2321 scope.go:111] "RemoveContainer" containerID="b14e7fada5642b72a77355be3587194eca52b65c2e16b1a9d03d4a29ec8ff73c"
Aug 11 00:35:12 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:12.569931 2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-xfrxz_kube-system(19d31762-bc36-413f-8533-e97b57d38a28)\"" pod="kube-system/registry-proxy-xfrxz" podUID=19d31762-bc36-413f-8533-e97b57d38a28
Aug 11 00:35:14 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:14.569913 2321 scope.go:111] "RemoveContainer" containerID="38c2a91f09c071cbeb4b3668c2ebd6a57f5b9c1dfdc3b138a383795cc803f634"
Aug 11 00:35:14 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:14.998077 2321 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for olm/olm-operator-859c88c96-zfpv9 through plugin: invalid network status for"
Aug 11 00:35:15 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:15.003640 2321 scope.go:111] "RemoveContainer" containerID="38c2a91f09c071cbeb4b3668c2ebd6a57f5b9c1dfdc3b138a383795cc803f634"
Aug 11 00:35:15 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:15.004080 2321 scope.go:111] "RemoveContainer" containerID="92a9c313b50af3db83df40b9e3de0bb7efa59a28ae8f1cc94597b359047e2d81"
Aug 11 00:35:15 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:15.005120 2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
Aug 11 00:35:16 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:16.018582 2321 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for olm/olm-operator-859c88c96-zfpv9 through plugin: invalid network status for"
Aug 11 00:35:21 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:21.581971 2321 scope.go:111] "RemoveContainer" containerID="92a9c313b50af3db83df40b9e3de0bb7efa59a28ae8f1cc94597b359047e2d81"
Aug 11 00:35:21 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:21.582423 2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
Aug 11 00:35:22 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:22.098463 2321 scope.go:111] "RemoveContainer" containerID="92a9c313b50af3db83df40b9e3de0bb7efa59a28ae8f1cc94597b359047e2d81"
Aug 11 00:35:22 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:22.098857 2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
Aug 11 00:35:23 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:23.569760 2321 scope.go:111] "RemoveContainer" containerID="93d05e6a4fdeadc429b3b8680409bcc95a4911306bd7c468e2c22e8baba6c554"
Aug 11 00:35:23 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:23.570191 2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
Aug 11 00:35:24 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:24.569489 2321 scope.go:111] "RemoveContainer" containerID="b14e7fada5642b72a77355be3587194eca52b65c2e16b1a9d03d4a29ec8ff73c"
Aug 11 00:35:24 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:24.569925 2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-xfrxz_kube-system(19d31762-bc36-413f-8533-e97b57d38a28)\"" pod="kube-system/registry-proxy-xfrxz" podUID=19d31762-bc36-413f-8533-e97b57d38a28
Aug 11 00:35:33 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:33.249830 2321 scope.go:111] "RemoveContainer" containerID="716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0"
Aug 11 00:35:33 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:33.278212 2321 scope.go:111] "RemoveContainer" containerID="716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0"
Aug 11 00:35:33 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:33.279028 2321 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0" containerID="716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0"
Aug 11 00:35:33 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:33.279076 2321 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0} err="failed to get container status \"716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0\": rpc error: code = Unknown desc = Error: No such container: 716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0"
Aug 11 00:35:34 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:34.574619 2321 kuberuntime_container.go:691] "Kill container failed" err="rpc error: code = Unknown desc = Error: No such container: 716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0" pod="kube-system/registry-dzdlw" podUID=4a872b2d-a2b1-46f9-9afd-c52b6647383f containerName="registry" containerID={Type:docker ID:716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0}
Aug 11 00:35:34 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:34.578429 2321 kubelet_pods.go:1288] "Failed killing the pod" err="failed to \"KillContainer\" for \"registry\" with KillContainerError: \"rpc error: code = Unknown desc = Error: No such container: 716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0\"" podName="registry-dzdlw"
*
* ==> storage-provisioner [0d6ae04912a6] <==
* I0811 00:31:24.716594 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0811 00:31:24.745311 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0811 00:31:24.745363 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0811 00:31:24.768040 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0811 00:31:24.768216 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210811003021-1387367_ae097080-2d56-4c92-b0f7-bfd9c649e5f6!
I0811 00:31:24.771936 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70e765f0-f18d-4a79-9f04-05826884f687", APIVersion:"v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210811003021-1387367_ae097080-2d56-4c92-b0f7-bfd9c649e5f6 became leader
I0811 00:31:24.968818 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210811003021-1387367_ae097080-2d56-4c92-b0f7-bfd9c649e5f6!
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210811003021-1387367 -n addons-20210811003021-1387367
helpers_test.go:262: (dbg) Run: kubectl --context addons-20210811003021-1387367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: gcp-auth-certs-create-429nc gcp-auth-certs-patch-7grzk ingress-nginx-admission-create-grgf6 ingress-nginx-admission-patch-xnff6
helpers_test.go:270: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:273: (dbg) Run: kubectl --context addons-20210811003021-1387367 describe pod gcp-auth-certs-create-429nc gcp-auth-certs-patch-7grzk ingress-nginx-admission-create-grgf6 ingress-nginx-admission-patch-xnff6
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context addons-20210811003021-1387367 describe pod gcp-auth-certs-create-429nc gcp-auth-certs-patch-7grzk ingress-nginx-admission-create-grgf6 ingress-nginx-admission-patch-xnff6: exit status 1 (88.553055ms)
** stderr **
Error from server (NotFound): pods "gcp-auth-certs-create-429nc" not found
Error from server (NotFound): pods "gcp-auth-certs-patch-7grzk" not found
Error from server (NotFound): pods "ingress-nginx-admission-create-grgf6" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-xnff6" not found
** /stderr **
helpers_test.go:275: kubectl --context addons-20210811003021-1387367 describe pod gcp-auth-certs-create-429nc gcp-auth-certs-patch-7grzk ingress-nginx-admission-create-grgf6 ingress-nginx-admission-patch-xnff6: exit status 1
--- FAIL: TestAddons/parallel/Registry (174.52s)