=== RUN TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT TestAddons/parallel/CSI
=== CONT TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 15.245151ms
addons_test.go:529: (dbg) Run: kubectl --context addons-20210812233805-198261 create -f testdata/csi-hostpath-driver/pvc.yaml
=== CONT TestAddons/parallel/CSI
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run: kubectl --context addons-20210812233805-198261 get pvc hpvc -o jsonpath={.status.phase} -n default
=== CONT TestAddons/parallel/CSI
addons_test.go:539: (dbg) Run: kubectl --context addons-20210812233805-198261 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [6e933c96-332f-4141-981d-29b0fc19a83d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [6e933c96-332f-4141-981d-29b0fc19a83d] Running
=== CONT TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.010327005s
addons_test.go:549: (dbg) Run: kubectl --context addons-20210812233805-198261 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run: kubectl --context addons-20210812233805-198261 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
=== CONT TestAddons/parallel/CSI
helpers_test.go:418: (dbg) Run: kubectl --context addons-20210812233805-198261 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
=== CONT TestAddons/parallel/CSI
addons_test.go:559: (dbg) Run: kubectl --context addons-20210812233805-198261 delete pod task-pv-pod
2021/08/12 23:40:41 [DEBUG] GET http://192.168.49.2:5000
=== CONT TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-20210812233805-198261 delete pod task-pv-pod: (14.057834501s)
addons_test.go:565: (dbg) Run: kubectl --context addons-20210812233805-198261 delete pvc hpvc
addons_test.go:571: (dbg) Run: kubectl --context addons-20210812233805-198261 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
=== CONT TestAddons/parallel/CSI
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run: kubectl --context addons-20210812233805-198261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run: kubectl --context addons-20210812233805-198261 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [01204c8f-17b1-4895-b733-b2a3f783d95c] Pending
helpers_test.go:343: "task-pv-pod-restore" [01204c8f-17b1-4895-b733-b2a3f783d95c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT TestAddons/parallel/CSI
addons_test.go:586: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: timed out waiting for the condition ****
addons_test.go:586: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210812233805-198261 -n addons-20210812233805-198261
addons_test.go:586: TestAddons/parallel/CSI: showing logs for failed pods as of 2021-08-12 23:46:56.246781118 +0000 UTC m=+552.393873893
addons_test.go:586: (dbg) Run: kubectl --context addons-20210812233805-198261 describe po task-pv-pod-restore -n default
addons_test.go:586: (dbg) kubectl --context addons-20210812233805-198261 describe po task-pv-pod-restore -n default:
Name: task-pv-pod-restore
Namespace: default
Priority: 0
Node: addons-20210812233805-198261/192.168.49.2
Start Time: Thu, 12 Aug 2021 23:40:55 +0000
Labels: app=task-pv-pod-restore
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
task-pv-container:
Container ID:
Image: nginx
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: k8s-minikube
GCP_PROJECT: k8s-minikube
GCLOUD_PROJECT: k8s-minikube
GOOGLE_CLOUD_PROJECT: k8s-minikube
CLOUDSDK_CORE_PROJECT: k8s-minikube
Mounts:
/google-app-creds.json from gcp-creds (ro)
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n9bg8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
task-pv-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hpvc-restore
ReadOnly: false
kube-api-access-n9bg8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m1s default-scheduler Successfully assigned default/task-pv-pod-restore to addons-20210812233805-198261
Normal SuccessfulAttachVolume 6m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-884489b9-6b33-4e08-8e9d-b25a55ce9e62"
Warning FailedMount 3m58s kubelet Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[gcp-creds task-pv-storage kube-api-access-n9bg8]: timed out waiting for the condition
Warning FailedMount 110s (x10 over 6m) kubelet MountVolume.SetUp failed for volume "gcp-creds" : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
Warning FailedMount 101s kubelet Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-n9bg8 gcp-creds task-pv-storage]: timed out waiting for the condition
addons_test.go:586: (dbg) Run: kubectl --context addons-20210812233805-198261 logs task-pv-pod-restore -n default
addons_test.go:586: (dbg) Non-zero exit: kubectl --context addons-20210812233805-198261 logs task-pv-pod-restore -n default: exit status 1 (74.001678ms)
** stderr **
Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: ContainerCreating
** /stderr **
addons_test.go:586: kubectl --context addons-20210812233805-198261 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:587: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: timed out waiting for the condition
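[Annotation] The events above point at the likely root cause: the pod's gcp-creds volume is a hostPath to /var/lib/minikube/google_application_credentials.json, and kubelet reports that path "is not a file". Per the audit table further down, `addons disable gcp-auth` ran at 23:40:55 UTC, the same instant task-pv-pod-restore was scheduled (Start Time 23:40:55), which suggests the gcp-auth webhook injected the mount into the pod spec just before the addon teardown removed the credentials file from the node. A sketch of commands to confirm this on a live run (profile name and paths are taken from this log; the jsonpath filter is illustrative):

    # Show the webhook-injected hostPath volume in the restored pod's spec
    kubectl --context addons-20210812233805-198261 get pod task-pv-pod-restore \
      -o jsonpath='{.spec.volumes[?(@.name=="gcp-creds")].hostPath.path}'

    # Check whether the credentials file actually exists on the minikube node
    minikube -p addons-20210812233805-198261 ssh -- \
      ls -l /var/lib/minikube/google_application_credentials.json

    # Re-enabling the addon should recreate the file (the suite enabled it
    # earlier with `addons enable gcp-auth --force`)
    minikube -p addons-20210812233805-198261 addons enable gcp-auth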
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect addons-20210812233805-198261
helpers_test.go:236: (dbg) docker inspect addons-20210812233805-198261:
-- stdout --
[
{
"Id": "d9ce824ea2fe08ca8f8344ba6a2e5be5d2a826939f0c8d2f89876fbc2e0200e0",
"Created": "2021-08-12T23:38:07.770604343Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 199822,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-08-12T23:38:08.241582291Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
"ResolvConfPath": "/var/lib/docker/containers/d9ce824ea2fe08ca8f8344ba6a2e5be5d2a826939f0c8d2f89876fbc2e0200e0/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/d9ce824ea2fe08ca8f8344ba6a2e5be5d2a826939f0c8d2f89876fbc2e0200e0/hostname",
"HostsPath": "/var/lib/docker/containers/d9ce824ea2fe08ca8f8344ba6a2e5be5d2a826939f0c8d2f89876fbc2e0200e0/hosts",
"LogPath": "/var/lib/docker/containers/d9ce824ea2fe08ca8f8344ba6a2e5be5d2a826939f0c8d2f89876fbc2e0200e0/d9ce824ea2fe08ca8f8344ba6a2e5be5d2a826939f0c8d2f89876fbc2e0200e0-json.log",
"Name": "/addons-20210812233805-198261",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-20210812233805-198261:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-20210812233805-198261",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/f2517d89c0682dbe2f89f8d9da8675f7f4e72ac6731a9abb4b0d0ba748da7216-init/diff:/var/lib/docker/overlay2/b09efe5a0553009063fda72c3ed720886de3e773fefa51d98415d6042dc023b4/diff:/var/lib/docker/overlay2/931f2cd946d25343dc0cc13ce37df8175a73208ae1925e361c7c2111d8000a83/diff:/var/lib/docker/overlay2/681ebc35a2027fbd95ec61e3e2778652853dfcdb2fe425ed8f39d1b73b2a9fc8/diff:/var/lib/docker/overlay2/54d817643afdf3ca7c362b64be25c01a0fcf2a41ed325f50d1efd3b7da8e0918/diff:/var/lib/docker/overlay2/7e072b6ef093455c573197f907fac3361e170f3055ecc71bedf73d93b47f542c/diff:/var/lib/docker/overlay2/c993b143d589268750fcc31cce47f0d5fd34e1f7d3cd554d921ff49b1360110f/diff:/var/lib/docker/overlay2/f82c4f888b0def5d403dedc737ec9a73b5acdf2d70eb17013503e0e2fc96755e/diff:/var/lib/docker/overlay2/2c8a9904c550e51834c92b87ad7448b10bdeda6efcbbf5c7063586a8a664a9b3/diff:/var/lib/docker/overlay2/0af512d7773b4106732b9bd3b0bf714fe3692154a1ad9a7de3ee282812c30a3b/diff:/var/lib/docker/overlay2/08d485
392afcbaed3f38e6a0b754c2f7b1f86da4ffd90b959831c930e2a929cb/diff:/var/lib/docker/overlay2/fbc60637d1527b0fdb0bf7026ddfee1bf79fbeed7bc1f8d9429be0b15d6091ae/diff:/var/lib/docker/overlay2/04a805c6f74f9f05052cd0abd884ef24b5a9f1eacdbb1acdb5c7e874c611a449/diff:/var/lib/docker/overlay2/aed9a791c15de0bfe280a94149c84e10f6521f463d943aef692dd3e33997a7ea/diff:/var/lib/docker/overlay2/12fc2bd562361fa63f410056f863991c3644eb299e52d88f33c000edfb8421c9/diff:/var/lib/docker/overlay2/78764a3667db284ccf61cef63f272079aa5267f046d1ed23e3774ba1cad0a25e/diff:/var/lib/docker/overlay2/368a3aacc03b4a381ce01868fc88c713068b782846b3584f9e073d067967296f/diff:/var/lib/docker/overlay2/5205d6ff3a20dd274de9f9bbaeb587bd35940bfab633f5e63e7e718759390fdf/diff:/var/lib/docker/overlay2/63f1c508a674783b34b49ab8766477e5a11170982d629f684fbdeae9f012d24b/diff:/var/lib/docker/overlay2/5f8dc73aed146998ecf6c48e1c0bf24cbff049c50138dbb6a1f2c96904dae52c/diff:/var/lib/docker/overlay2/1fe62ab0cf95ac3017ddc923b46c6ed8f4da590546a973c645c97ea55769820d/diff:/var/lib/d
ocker/overlay2/62eb0f359dd134a3e046973609bad4d261b711b2c31f2cf9b3d43bb02fd010b8/diff:/var/lib/docker/overlay2/7428fbb9ac789c1500ab17b17b3da763754d9ff67c289db0990987050dfaa9df/diff:/var/lib/docker/overlay2/0c722d1206ed2aa8af375234336a8e06cbb7b910e5ea634189d83bd5c01411d3/diff:/var/lib/docker/overlay2/892a23758686d011e50e6eb3f8d32bfa0b7e6a3689f420285761523ca87dcdb1/diff:/var/lib/docker/overlay2/26986f84888c50de5f3cd46fdaaafbcbc9ecb3add899eb9b4a9be4f0ba068859/diff:/var/lib/docker/overlay2/ee7edc4d81dcdd60a3d41bec13932d1beb2e09021634dfb24bf7f6f44e4a5bf9/diff:/var/lib/docker/overlay2/5936a00b9e3425d601dd1d66d25033341faf4023a1ea8a4bb3b496add62fa74f/diff:/var/lib/docker/overlay2/f600a9a9c29d25be645f9564403f1dd3d0dafd64fa59be6fbbcec03f39702a27/diff:/var/lib/docker/overlay2/fab80172f855332dcff728db79d062f072a293a31956d31f7db7dede0175d520/diff:/var/lib/docker/overlay2/7310e2eb51732bd7ebf6f88d2010c5aacafa6f00f520d4418437894b0d515896/diff:/var/lib/docker/overlay2/87e9bcbdc3f02cf4fe9355440914ca728b5aaebcff49c5e0cbf490b6c3d
cc9ce/diff:/var/lib/docker/overlay2/ddd29f2ebc74d162eea55914e1991f547f99097b76c2c259f3c9e2b4cafab849/diff:/var/lib/docker/overlay2/8018c4784eba668d70ec1fb116bc416abe7d0132b3e9ab09627cb2a3e786cf11/diff:/var/lib/docker/overlay2/c582130e3a8538e84139e972640c052422a95f6ac87c313c52a4dbe536774e29/diff:/var/lib/docker/overlay2/750f1d6c01c2ad01de6ed86119f7fd55b07bc96245b47e38bb4771f74b0c5b08/diff:/var/lib/docker/overlay2/f289fdcd3dbffe14a82e06046343882c9d823476e19f20a765b75532c647985a/diff",
"MergedDir": "/var/lib/docker/overlay2/f2517d89c0682dbe2f89f8d9da8675f7f4e72ac6731a9abb4b0d0ba748da7216/merged",
"UpperDir": "/var/lib/docker/overlay2/f2517d89c0682dbe2f89f8d9da8675f7f4e72ac6731a9abb4b0d0ba748da7216/diff",
"WorkDir": "/var/lib/docker/overlay2/f2517d89c0682dbe2f89f8d9da8675f7f4e72ac6731a9abb4b0d0ba748da7216/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-20210812233805-198261",
"Source": "/var/lib/docker/volumes/addons-20210812233805-198261/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-20210812233805-198261",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-20210812233805-198261",
"name.minikube.sigs.k8s.io": "addons-20210812233805-198261",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "6da899fb4ec4b94754103b95e6ebce502469ad10f36e37f1eb27c3ed28da4812",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32972"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32971"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32968"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32970"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32969"
}
]
},
"SandboxKey": "/var/run/docker/netns/6da899fb4ec4",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-20210812233805-198261": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"d9ce824ea2fe"
],
"NetworkID": "008dbd68167ef1d2a28fce006cee2bc1e8962be0951071376ce990149fb0e6fe",
"EndpointID": "739154bffab5b642cd3e469d30005e01938c347e552cec55a02ffa307d0666de",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
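[Annotation] The docker inspect dump above shows the node container itself is healthy: State.Status is "running", the profile network assigns the expected static IP 192.168.49.2, and the API server port 8443/tcp is published on 127.0.0.1:32969, so the mount failure is inside the cluster rather than at the docker-driver level. A sketch for pulling just those fields instead of the full JSON (container name taken from this log; the Ports template mirrors the one the test harness itself uses for 22/tcp):

    # Container state and restart count
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' addons-20210812233805-198261

    # Static IP on the profile network
    docker inspect -f '{{(index .NetworkSettings.Networks "addons-20210812233805-198261").IPAddress}}' addons-20210812233805-198261

    # Host port mapped to the API server (8443/tcp)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-20210812233805-198261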
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210812233805-198261 -n addons-20210812233805-198261
helpers_test.go:245: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p addons-20210812233805-198261 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812233805-198261 logs -n 25: (1.233694192s)
helpers_test.go:253: TestAddons/parallel/CSI logs:
-- stdout --
*
* ==> Audit <==
* |---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete | --all | download-only-20210812233743-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:37:58 UTC | Thu, 12 Aug 2021 23:37:58 UTC |
| delete | -p | download-only-20210812233743-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:37:58 UTC | Thu, 12 Aug 2021 23:37:59 UTC |
| | download-only-20210812233743-198261 | | | | | |
| delete | -p | download-only-20210812233743-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:37:59 UTC | Thu, 12 Aug 2021 23:37:59 UTC |
| | download-only-20210812233743-198261 | | | | | |
| delete | -p | download-docker-20210812233759-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:38:05 UTC | Thu, 12 Aug 2021 23:38:05 UTC |
| | download-docker-20210812233759-198261 | | | | | |
| start | -p | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:38:05 UTC | Thu, 12 Aug 2021 23:39:56 UTC |
| | addons-20210812233805-198261 | | | | | |
| | --wait=true --memory=4000 | | | | | |
| | --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=olm | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=helm-tiller | | | | | |
| -p | addons-20210812233805-198261 | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:40:09 UTC | Thu, 12 Aug 2021 23:40:19 UTC |
| | addons enable gcp-auth --force | | | | | |
| -p | addons-20210812233805-198261 | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:40:24 UTC | Thu, 12 Aug 2021 23:40:24 UTC |
| | addons disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| -p | addons-20210812233805-198261 | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:40:33 UTC | Thu, 12 Aug 2021 23:40:33 UTC |
| | ssh curl -s http://127.0.0.1/ | | | | | |
| | -H 'Host: nginx.example.com' | | | | | |
| -p | addons-20210812233805-198261 | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:40:34 UTC | Thu, 12 Aug 2021 23:40:34 UTC |
| | ssh curl -s http://127.0.0.1/ | | | | | |
| | -H 'Host: nginx.example.com' | | | | | |
| -p | addons-20210812233805-198261 | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:40:41 UTC | Thu, 12 Aug 2021 23:40:41 UTC |
| | ip | | | | | |
| -p | addons-20210812233805-198261 | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:40:41 UTC | Thu, 12 Aug 2021 23:40:41 UTC |
| | addons disable registry | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| -p | addons-20210812233805-198261 | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:40:52 UTC | Thu, 12 Aug 2021 23:40:53 UTC |
| | addons disable helm-tiller | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| -p | addons-20210812233805-198261 | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:40:55 UTC | Thu, 12 Aug 2021 23:41:01 UTC |
| | addons disable gcp-auth | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| -p | addons-20210812233805-198261 | addons-20210812233805-198261 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:40:34 UTC | Thu, 12 Aug 2021 23:41:03 UTC |
| | addons disable ingress | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/08/12 23:38:05
Running on machine: debian-jenkins-agent-4
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0812 23:38:05.958936 199175 out.go:298] Setting OutFile to fd 1 ...
I0812 23:38:05.959024 199175 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0812 23:38:05.959028 199175 out.go:311] Setting ErrFile to fd 2...
I0812 23:38:05.959031 199175 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0812 23:38:05.959139 199175 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
I0812 23:38:05.959435 199175 out.go:305] Setting JSON to false
I0812 23:38:05.994997 199175 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":12048,"bootTime":1628799438,"procs":155,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0812 23:38:05.995105 199175 start.go:121] virtualization: kvm guest
I0812 23:38:05.997514 199175 out.go:177] * [addons-20210812233805-198261] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
I0812 23:38:05.998923 199175 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
I0812 23:38:05.997719 199175 notify.go:169] Checking for updates...
I0812 23:38:06.000416 199175 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0812 23:38:06.001699 199175 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
I0812 23:38:06.003143 199175 out.go:177] - MINIKUBE_LOCATION=12230
I0812 23:38:06.003457 199175 driver.go:335] Setting default libvirt URI to qemu:///system
I0812 23:38:06.053721 199175 docker.go:132] docker version: linux-19.03.15
I0812 23:38:06.053826 199175 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0812 23:38:06.137790 199175 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:201 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-12 23:38:06.090888989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0812 23:38:06.137883 199175 docker.go:244] overlay module found
I0812 23:38:06.139979 199175 out.go:177] * Using the docker driver based on user configuration
I0812 23:38:06.140008 199175 start.go:278] selected driver: docker
I0812 23:38:06.140015 199175 start.go:751] validating driver "docker" against <nil>
I0812 23:38:06.140033 199175 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0812 23:38:06.140078 199175 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0812 23:38:06.140098 199175 out.go:242] ! Your cgroup does not allow setting memory.
I0812 23:38:06.141418 199175 out.go:177] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0812 23:38:06.142297 199175 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0812 23:38:06.223350 199175 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:201 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-12 23:38:06.176568488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0812 23:38:06.223449 199175 start_flags.go:263] no existing cluster config was found, will generate one from the flags
I0812 23:38:06.223574 199175 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0812 23:38:06.223595 199175 cni.go:93] Creating CNI manager for ""
I0812 23:38:06.223601 199175 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0812 23:38:06.223606 199175 start_flags.go:277] config:
{Name:addons-20210812233805-198261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812233805-198261 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0812 23:38:06.225616 199175 out.go:177] * Starting control plane node addons-20210812233805-198261 in cluster addons-20210812233805-198261
I0812 23:38:06.225677 199175 cache.go:117] Beginning downloading kic base image for docker with docker
I0812 23:38:06.227248 199175 out.go:177] * Pulling base image ...
I0812 23:38:06.227287 199175 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
I0812 23:38:06.227327 199175 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
I0812 23:38:06.227340 199175 cache.go:56] Caching tarball of preloaded images
I0812 23:38:06.227378 199175 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
I0812 23:38:06.227529 199175 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0812 23:38:06.227545 199175 cache.go:59] Finished verifying existence of preloaded tar for v1.21.3 on docker
I0812 23:38:06.227830 199175 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/config.json ...
I0812 23:38:06.227860 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/config.json: {Name:mk906c822294a6f0493e8ed7f8131fc7921bab61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:06.311992 199175 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
I0812 23:38:06.312023 199175 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
I0812 23:38:06.312041 199175 cache.go:205] Successfully downloaded all kic artifacts
I0812 23:38:06.312089 199175 start.go:313] acquiring machines lock for addons-20210812233805-198261: {Name:mkde208ee6aa0b2e9932c0a4bb65df3a3dec3a04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0812 23:38:06.312249 199175 start.go:317] acquired machines lock for "addons-20210812233805-198261" in 138.567µs
I0812 23:38:06.312281 199175 start.go:89] Provisioning new machine with config: &{Name:addons-20210812233805-198261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812233805-198261 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
I0812 23:38:06.312411 199175 start.go:126] createHost starting for "" (driver="docker")
I0812 23:38:06.314554 199175 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0812 23:38:06.314773 199175 start.go:160] libmachine.API.Create for "addons-20210812233805-198261" (driver="docker")
I0812 23:38:06.314811 199175 client.go:168] LocalClient.Create starting
I0812 23:38:06.314929 199175 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
I0812 23:38:06.482226 199175 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
I0812 23:38:06.669300 199175 cli_runner.go:115] Run: docker network inspect addons-20210812233805-198261 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0812 23:38:06.705202 199175 cli_runner.go:162] docker network inspect addons-20210812233805-198261 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0812 23:38:06.705649 199175 network_create.go:255] running [docker network inspect addons-20210812233805-198261] to gather additional debugging logs...
I0812 23:38:06.705682 199175 cli_runner.go:115] Run: docker network inspect addons-20210812233805-198261
W0812 23:38:06.741355 199175 cli_runner.go:162] docker network inspect addons-20210812233805-198261 returned with exit code 1
I0812 23:38:06.741387 199175 network_create.go:258] error running [docker network inspect addons-20210812233805-198261]: docker network inspect addons-20210812233805-198261: exit status 1
stdout:
[]
stderr:
Error: No such network: addons-20210812233805-198261
I0812 23:38:06.741403 199175 network_create.go:260] output of [docker network inspect addons-20210812233805-198261]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: addons-20210812233805-198261
** /stderr **
I0812 23:38:06.741464 199175 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0812 23:38:06.778223 199175 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00059a140] misses:0}
I0812 23:38:06.778271 199175 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0812 23:38:06.778286 199175 network_create.go:106] attempt to create docker network addons-20210812233805-198261 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0812 23:38:06.778331 199175 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210812233805-198261
I0812 23:38:06.848437 199175 network_create.go:90] docker network addons-20210812233805-198261 192.168.49.0/24 created
I0812 23:38:06.848474 199175 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210812233805-198261" container
I0812 23:38:06.848592 199175 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0812 23:38:06.885348 199175 cli_runner.go:115] Run: docker volume create addons-20210812233805-198261 --label name.minikube.sigs.k8s.io=addons-20210812233805-198261 --label created_by.minikube.sigs.k8s.io=true
I0812 23:38:06.922092 199175 oci.go:102] Successfully created a docker volume addons-20210812233805-198261
I0812 23:38:06.922174 199175 cli_runner.go:115] Run: docker run --rm --name addons-20210812233805-198261-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210812233805-198261 --entrypoint /usr/bin/test -v addons-20210812233805-198261:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
I0812 23:38:07.649900 199175 oci.go:106] Successfully prepared a docker volume addons-20210812233805-198261
W0812 23:38:07.649949 199175 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0812 23:38:07.649958 199175 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0812 23:38:07.650016 199175 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0812 23:38:07.650049 199175 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
I0812 23:38:07.650083 199175 kic.go:179] Starting extracting preloaded images to volume ...
I0812 23:38:07.650156 199175 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210812233805-198261:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
I0812 23:38:07.728090 199175 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210812233805-198261 --name addons-20210812233805-198261 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210812233805-198261 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210812233805-198261 --network addons-20210812233805-198261 --ip 192.168.49.2 --volume addons-20210812233805-198261:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
I0812 23:38:08.249782 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Running}}
I0812 23:38:08.292784 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:08.339168 199175 cli_runner.go:115] Run: docker exec addons-20210812233805-198261 stat /var/lib/dpkg/alternatives/iptables
I0812 23:38:08.475792 199175 oci.go:278] the created container "addons-20210812233805-198261" has a running status.
I0812 23:38:08.475830 199175 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa...
I0812 23:38:08.633619 199175 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0812 23:38:09.017109 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:09.057551 199175 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0812 23:38:09.057576 199175 kic_runner.go:115] Args: [docker exec --privileged addons-20210812233805-198261 chown docker:docker /home/docker/.ssh/authorized_keys]
I0812 23:38:12.005806 199175 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210812233805-198261:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.355607629s)
I0812 23:38:12.005839 199175 kic.go:188] duration metric: took 4.355753 seconds to extract preloaded images to volume
I0812 23:38:12.005906 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:12.041488 199175 machine.go:88] provisioning docker machine ...
I0812 23:38:12.041527 199175 ubuntu.go:169] provisioning hostname "addons-20210812233805-198261"
I0812 23:38:12.041580 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:12.077945 199175 main.go:130] libmachine: Using SSH client type: native
I0812 23:38:12.078147 199175 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil> [] 0s} 127.0.0.1 32972 <nil> <nil>}
I0812 23:38:12.078165 199175 main.go:130] libmachine: About to run SSH command:
sudo hostname addons-20210812233805-198261 && echo "addons-20210812233805-198261" | sudo tee /etc/hostname
I0812 23:38:12.222749 199175 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210812233805-198261
I0812 23:38:12.222833 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:12.260077 199175 main.go:130] libmachine: Using SSH client type: native
I0812 23:38:12.260223 199175 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil> [] 0s} 127.0.0.1 32972 <nil> <nil>}
I0812 23:38:12.260242 199175 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-20210812233805-198261' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210812233805-198261/g' /etc/hosts;
else
echo '127.0.1.1 addons-20210812233805-198261' | sudo tee -a /etc/hosts;
fi
fi
I0812 23:38:12.370762 199175 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0812 23:38:12.370799 199175 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem
ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
I0812 23:38:12.370821 199175 ubuntu.go:177] setting up certificates
I0812 23:38:12.370829 199175 provision.go:83] configureAuth start
I0812 23:38:12.370873 199175 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210812233805-198261
I0812 23:38:12.407585 199175 provision.go:137] copyHostCerts
I0812 23:38:12.407670 199175 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
I0812 23:38:12.407767 199175 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
I0812 23:38:12.407818 199175 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
I0812 23:38:12.407864 199175 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.addons-20210812233805-198261 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210812233805-198261]
I0812 23:38:12.534125 199175 provision.go:171] copyRemoteCerts
I0812 23:38:12.534184 199175 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0812 23:38:12.534219 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:12.570353 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:12.654297 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0812 23:38:12.670746 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0812 23:38:12.686486 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0812 23:38:12.702277 199175 provision.go:86] duration metric: configureAuth took 331.434795ms
I0812 23:38:12.702305 199175 ubuntu.go:193] setting minikube options for container-runtime
I0812 23:38:12.702478 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:12.739222 199175 main.go:130] libmachine: Using SSH client type: native
I0812 23:38:12.739406 199175 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil> [] 0s} 127.0.0.1 32972 <nil> <nil>}
I0812 23:38:12.739424 199175 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0812 23:38:12.850968 199175 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
I0812 23:38:12.850995 199175 ubuntu.go:71] root file system type: overlay
I0812 23:38:12.851287 199175 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0812 23:38:12.851357 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:12.888296 199175 main.go:130] libmachine: Using SSH client type: native
I0812 23:38:12.888447 199175 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil> [] 0s} 127.0.0.1 32972 <nil> <nil>}
I0812 23:38:12.888512 199175 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0812 23:38:13.007280 199175 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0812 23:38:13.007366 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:13.044249 199175 main.go:130] libmachine: Using SSH client type: native
I0812 23:38:13.044397 199175 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil> [] 0s} 127.0.0.1 32972 <nil> <nil>}
I0812 23:38:13.044415 199175 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0812 23:38:13.647289 199175 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-08-12 23:38:13.002481829 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
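The drop-in comments above explain why ExecStart= must be cleared before redefining it; the SSH step that follows then only swaps the unit file and restarts Docker when the rendered file actually differs from what is on disk. Below is a minimal Go sketch of that diff-or-replace pattern, assuming local paths and plain systemctl calls rather than minikube's SSH runner; swapIfChanged is an invented name used only for illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// swapIfChanged replaces unitPath with newPath and restarts the service
// only when the two files differ, mirroring the diff-or-replace shell
// one-liner in the log above. Paths and the service name are illustrative.
func swapIfChanged(unitPath, newPath, service string) error {
	// diff exits 0 when the files are identical; in that case there is nothing to do.
	if err := exec.Command("diff", "-u", unitPath, newPath).Run(); err == nil {
		return os.Remove(newPath)
	}
	if err := os.Rename(newPath, unitPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := swapIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}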
I0812 23:38:13.647328 199175 machine.go:91] provisioned docker machine in 1.605818262s
I0812 23:38:13.647339 199175 client.go:171] LocalClient.Create took 7.332518582s
I0812 23:38:13.647355 199175 start.go:168] duration metric: libmachine.API.Create for "addons-20210812233805-198261" took 7.332582781s
I0812 23:38:13.647368 199175 start.go:267] post-start starting for "addons-20210812233805-198261" (driver="docker")
I0812 23:38:13.647372 199175 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0812 23:38:13.647422 199175 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0812 23:38:13.647458 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:13.682653 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:13.766563 199175 ssh_runner.go:149] Run: cat /etc/os-release
I0812 23:38:13.769271 199175 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0812 23:38:13.769290 199175 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0812 23:38:13.769298 199175 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0812 23:38:13.769305 199175 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0812 23:38:13.769313 199175 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
I0812 23:38:13.769358 199175 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
I0812 23:38:13.769380 199175 start.go:270] post-start completed in 122.007538ms
I0812 23:38:13.769638 199175 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210812233805-198261
I0812 23:38:13.804937 199175 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/config.json ...
I0812 23:38:13.805152 199175 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0812 23:38:13.805193 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:13.840327 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:13.919825 199175 start.go:129] duration metric: createHost completed in 7.607395774s
I0812 23:38:13.919851 199175 start.go:80] releasing machines lock for "addons-20210812233805-198261", held for 7.607587307s
I0812 23:38:13.919934 199175 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210812233805-198261
I0812 23:38:13.954756 199175 ssh_runner.go:149] Run: systemctl --version
I0812 23:38:13.954806 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:13.954818 199175 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0812 23:38:13.954871 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:13.991207 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:13.996625 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:14.070946 199175 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0812 23:38:14.106526 199175 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0812 23:38:14.115751 199175 cruntime.go:249] skipping containerd shutdown because we are bound to it
I0812 23:38:14.115822 199175 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0812 23:38:14.124960 199175 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0812 23:38:14.136967 199175 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0812 23:38:14.196317 199175 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0812 23:38:14.253213 199175 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0812 23:38:14.262428 199175 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0812 23:38:14.317942 199175 ssh_runner.go:149] Run: sudo systemctl start docker
I0812 23:38:14.326687 199175 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0812 23:38:14.373730 199175 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0812 23:38:14.421899 199175 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
I0812 23:38:14.421986 199175 cli_runner.go:115] Run: docker network inspect addons-20210812233805-198261 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0812 23:38:14.457350 199175 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0812 23:38:14.460761 199175 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
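The host.minikube.internal entry is kept idempotent by filtering out any existing line for that name before appending a fresh one. A rough Go equivalent of that grep-and-append pattern, writing to a throwaway path rather than /etc/hosts; ensureHostsEntry is an illustrative name, not minikube's own function.

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line
// mapping name to ip, mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing entry for this hostname; keep everything else.
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.49.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}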
I0812 23:38:14.470049 199175 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
I0812 23:38:14.470108 199175 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0812 23:38:14.507740 199175 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4
-- /stdout --
I0812 23:38:14.507769 199175 docker.go:466] Images already preloaded, skipping extraction
I0812 23:38:14.507813 199175 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0812 23:38:14.545706 199175 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4
-- /stdout --
I0812 23:38:14.545731 199175 cache_images.go:74] Images are preloaded, skipping loading
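Extraction of the preload tarball is skipped because every expected image tag already shows up in docker images. A simplified check along those lines, assuming a hand-picked subset of the expected tags; imagesPreloaded is a hypothetical helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every tag in want already appears in the
// local `docker images` listing, which is the condition the log uses to
// decide that extraction can be skipped.
func imagesPreloaded(want []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range want {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"k8s.gcr.io/kube-apiserver:v1.21.3",
		"k8s.gcr.io/pause:3.4.1",
	})
	fmt.Println(ok, err)
}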
I0812 23:38:14.545775 199175 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0812 23:38:14.627747 199175 cni.go:93] Creating CNI manager for ""
I0812 23:38:14.627773 199175 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0812 23:38:14.627787 199175 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0812 23:38:14.627804 199175 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210812233805-198261 NodeName:addons-20210812233805-198261 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0812 23:38:14.627992 199175 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "addons-20210812233805-198261"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.21.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
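The kubeadm.yaml above is produced by filling a config template with the values from the kubeadm options line (pod CIDR, service CIDR, DNS domain, and so on). A cut-down illustration of that templating step, reproducing only the networking stanza; the template string and struct here are stand-ins, not the real minikube template.

package main

import (
	"os"
	"text/template"
)

// networkingTmpl is a trimmed-down stand-in for the full kubeadm config
// template; only the networking block from the log is reproduced here.
const networkingTmpl = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type netOpts struct {
	DNSDomain   string
	PodSubnet   string
	ServiceCIDR string
}

func main() {
	t := template.Must(template.New("networking").Parse(networkingTmpl))
	// Values copied from the kubeadm options line above.
	_ = t.Execute(os.Stdout, netOpts{
		DNSDomain:   "cluster.local",
		PodSubnet:   "10.244.0.0/16",
		ServiceCIDR: "10.96.0.0/12",
	})
}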
I0812 23:38:14.628092 199175 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20210812233805-198261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.21.3 ClusterName:addons-20210812233805-198261 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0812 23:38:14.628153 199175 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
I0812 23:38:14.635075 199175 binaries.go:44] Found k8s binaries, skipping transfer
I0812 23:38:14.635133 199175 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0812 23:38:14.641773 199175 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0812 23:38:14.653783 199175 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0812 23:38:14.665808 199175 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
I0812 23:38:14.677942 199175 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0812 23:38:14.680924 199175 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0812 23:38:14.689948 199175 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261 for IP: 192.168.49.2
I0812 23:38:14.690005 199175 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
I0812 23:38:15.033394 199175 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt ...
I0812 23:38:15.033435 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt: {Name:mk0b75173bad9da70abe41aaedc104a2e5cfb5f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.033655 199175 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key ...
I0812 23:38:15.033672 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key: {Name:mk65cddf1bd3524e99288f1a294e2616bf216872 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.033885 199175 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
I0812 23:38:15.108849 199175 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt ...
I0812 23:38:15.108885 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt: {Name:mkb6e1b0354afef9d058509d3eb05c2202eff14a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.109082 199175 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key ...
I0812 23:38:15.109096 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key: {Name:mkfe4a443d0ad3849e2295fff42abc591a91cc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.109209 199175 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/client.key
I0812 23:38:15.109231 199175 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/client.crt with IP's: []
I0812 23:38:15.272379 199175 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/client.crt ...
I0812 23:38:15.272412 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/client.crt: {Name:mk3c288412062bb1b49ece16b8bd397371a2856d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.272618 199175 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/client.key ...
I0812 23:38:15.272632 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/client.key: {Name:mk10ae8ec3ba31d49358bdbe8c8ac4d1e32b43a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.272716 199175 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.key.dd3b5fb2
I0812 23:38:15.272726 199175 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0812 23:38:15.350303 199175 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.crt.dd3b5fb2 ...
I0812 23:38:15.350337 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.crt.dd3b5fb2: {Name:mk0d4763bcd708266d0e2c41fb8bec9f922a0fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.350541 199175 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.key.dd3b5fb2 ...
I0812 23:38:15.350554 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.key.dd3b5fb2: {Name:mk983fb0479f0f9ed8361c86ab86f604b24f0300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.350633 199175 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.crt
I0812 23:38:15.350689 199175 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.key
I0812 23:38:15.350734 199175 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/proxy-client.key
I0812 23:38:15.350745 199175 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/proxy-client.crt with IP's: []
I0812 23:38:15.453168 199175 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/proxy-client.crt ...
I0812 23:38:15.453204 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/proxy-client.crt: {Name:mkdab2afdd67295fd290d3b6aa8086dc0ee68ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.453395 199175 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/proxy-client.key ...
I0812 23:38:15.453414 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/proxy-client.key: {Name:mk732b818b4e85345361783604ea43feafbd28de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:15.453585 199175 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1679 bytes)
I0812 23:38:15.453625 199175 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
I0812 23:38:15.453650 199175 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
I0812 23:38:15.453678 199175 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
I0812 23:38:15.454668 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0812 23:38:15.540464 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0812 23:38:15.558110 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0812 23:38:15.574383 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812233805-198261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0812 23:38:15.590650 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0812 23:38:15.606940 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0812 23:38:15.625171 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0812 23:38:15.641762 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0812 23:38:15.658193 199175 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0812 23:38:15.674356 199175 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0812 23:38:15.686437 199175 ssh_runner.go:149] Run: openssl version
I0812 23:38:15.691158 199175 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0812 23:38:15.698617 199175 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0812 23:38:15.701628 199175 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:38 /usr/share/ca-certificates/minikubeCA.pem
I0812 23:38:15.701678 199175 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0812 23:38:15.706451 199175 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
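Installing the minikube CA into the system trust store ends with an openssl x509 -hash symlink so OpenSSL can find the certificate by subject hash. A sketch of that last step, assuming the same paths as the log and minimal error handling; linkBySubjectHash is an invented name.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 symlink that
// OpenSSL uses to look up a CA certificate, mirroring the
// openssl x509 -hash -noout step in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Ignore "already exists" so repeated runs stay idempotent.
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		return err
	}
	return nil
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}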
I0812 23:38:15.713814 199175 kubeadm.go:390] StartCluster: {Name:addons-20210812233805-198261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812233805-198261 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0812 23:38:15.713920 199175 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0812 23:38:15.750169 199175 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0812 23:38:15.757465 199175 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0812 23:38:15.764681 199175 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0812 23:38:15.764733 199175 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0812 23:38:15.771396 199175 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0812 23:38:15.771436 199175 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0812 23:38:34.778044 199175 out.go:204] - Generating certificates and keys ...
I0812 23:38:34.781137 199175 out.go:204] - Booting up control plane ...
I0812 23:38:34.783589 199175 out.go:204] - Configuring RBAC rules ...
I0812 23:38:34.785367 199175 cni.go:93] Creating CNI manager for ""
I0812 23:38:34.785381 199175 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0812 23:38:34.785405 199175 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0812 23:38:34.785463 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:34.785467 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=addons-20210812233805-198261 minikube.k8s.io/updated_at=2021_08_12T23_38_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:35.092046 199175 ops.go:34] apiserver oom_adj: -16
I0812 23:38:35.092144 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:35.681583 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:36.181783 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:36.681780 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:37.181038 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:37.681211 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:38.181740 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:38.681077 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:39.181603 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:39.681796 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:40.181890 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:40.681850 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:41.181153 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:41.681079 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:42.181658 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:42.681443 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:43.181880 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:44.181466 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:44.681895 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:45.181930 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:45.681651 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:46.181904 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:46.681393 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:47.182035 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:47.681009 199175 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0812 23:38:47.756542 199175 kubeadm.go:985] duration metric: took 12.971126289s to wait for elevateKubeSystemPrivileges.
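The burst of kubectl get sa default calls above is a plain poll: after kubeadm init finishes, the default service account only appears once the controller manager has caught up, so the check is retried every 500ms until it succeeds. A bare-bones version of that loop, with the kubectl invocation and timeout chosen for illustration rather than taken from minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, the same retry-every-500ms pattern visible in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default").Run()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %v: %v", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}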
I0812 23:38:47.756585 199175 kubeadm.go:392] StartCluster complete in 32.04277878s
I0812 23:38:47.756609 199175 settings.go:142] acquiring lock: {Name:mked6b7bc1b837b8a4b906e57bd78a0cb4d3be56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:47.756754 199175 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
I0812 23:38:47.757279 199175 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk8743591d73a8efb1d35108616e0753f813b6b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 23:38:48.273088 199175 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210812233805-198261" rescaled to 1
I0812 23:38:48.273154 199175 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
I0812 23:38:48.275541 199175 out.go:177] * Verifying Kubernetes components...
I0812 23:38:48.273210 199175 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0812 23:38:48.275606 199175 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0812 23:38:48.273227 199175 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
I0812 23:38:48.275690 199175 addons.go:59] Setting volumesnapshots=true in profile "addons-20210812233805-198261"
I0812 23:38:48.275709 199175 addons.go:135] Setting addon volumesnapshots=true in "addons-20210812233805-198261"
I0812 23:38:48.275718 199175 addons.go:59] Setting ingress=true in profile "addons-20210812233805-198261"
I0812 23:38:48.275730 199175 addons.go:135] Setting addon ingress=true in "addons-20210812233805-198261"
I0812 23:38:48.275739 199175 host.go:66] Checking if "addons-20210812233805-198261" exists ...
I0812 23:38:48.275751 199175 host.go:66] Checking if "addons-20210812233805-198261" exists ...
I0812 23:38:48.275782 199175 addons.go:59] Setting default-storageclass=true in profile "addons-20210812233805-198261"
I0812 23:38:48.275804 199175 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210812233805-198261"
I0812 23:38:48.275814 199175 addons.go:59] Setting olm=true in profile "addons-20210812233805-198261"
I0812 23:38:48.275828 199175 addons.go:59] Setting helm-tiller=true in profile "addons-20210812233805-198261"
I0812 23:38:48.275843 199175 addons.go:135] Setting addon olm=true in "addons-20210812233805-198261"
I0812 23:38:48.275842 199175 addons.go:59] Setting metrics-server=true in profile "addons-20210812233805-198261"
I0812 23:38:48.275861 199175 addons.go:59] Setting storage-provisioner=true in profile "addons-20210812233805-198261"
I0812 23:38:48.275855 199175 addons.go:59] Setting registry=true in profile "addons-20210812233805-198261"
I0812 23:38:48.275872 199175 addons.go:135] Setting addon storage-provisioner=true in "addons-20210812233805-198261"
I0812 23:38:48.275875 199175 addons.go:135] Setting addon registry=true in "addons-20210812233805-198261"
W0812 23:38:48.275880 199175 addons.go:147] addon storage-provisioner should already be in state true
I0812 23:38:48.275882 199175 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210812233805-198261"
I0812 23:38:48.275888 199175 host.go:66] Checking if "addons-20210812233805-198261" exists ...
I0812 23:38:48.275902 199175 host.go:66] Checking if "addons-20210812233805-198261" exists ...
I0812 23:38:48.275905 199175 host.go:66] Checking if "addons-20210812233805-198261" exists ...
I0812 23:38:48.275927 199175 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210812233805-198261"
I0812 23:38:48.275961 199175 host.go:66] Checking if "addons-20210812233805-198261" exists ...
I0812 23:38:48.276124 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.276301 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.276381 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.276390 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.275864 199175 addons.go:135] Setting addon metrics-server=true in "addons-20210812233805-198261"
I0812 23:38:48.276423 199175 host.go:66] Checking if "addons-20210812233805-198261" exists ...
I0812 23:38:48.276506 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.276391 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.275848 199175 addons.go:135] Setting addon helm-tiller=true in "addons-20210812233805-198261"
I0812 23:38:48.276595 199175 host.go:66] Checking if "addons-20210812233805-198261" exists ...
I0812 23:38:48.276848 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.277053 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.277059 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.300587 199175 node_ready.go:35] waiting up to 6m0s for node "addons-20210812233805-198261" to be "Ready" ...
I0812 23:38:48.305439 199175 node_ready.go:49] node "addons-20210812233805-198261" has status "Ready":"True"
I0812 23:38:48.305461 199175 node_ready.go:38] duration metric: took 4.833297ms waiting for node "addons-20210812233805-198261" to be "Ready" ...
I0812 23:38:48.305473 199175 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0812 23:38:48.320334 199175 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace to be "Ready" ...
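Waiting for a system pod to become Ready boils down to repeatedly reading its Ready condition until it reports True or the 6m0s budget runs out. A minimal sketch of that readiness poll using a kubectl jsonpath query; the namespace, pod name, and interval are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reads the Ready condition of a pod with a kubectl jsonpath
// query; the surrounding loop retries until it reports "True" or times out.
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, _ := podReady("kube-system", "coredns-558bd4d5db-6chdq"); ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}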
I0812 23:38:48.359782 199175 out.go:177] - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
I0812 23:38:48.362264 199175 out.go:177] - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0812 23:38:48.364084 199175 out.go:177] - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0812 23:38:48.364147 199175 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
I0812 23:38:48.364161 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
I0812 23:38:48.364217 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:48.366005 199175 out.go:177] - Using image gcr.io/google_containers/kube-registry-proxy:0.4
I0812 23:38:48.367454 199175 out.go:177] - Using image registry:2.7.1
I0812 23:38:48.367564 199175 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
I0812 23:38:48.367575 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
I0812 23:38:48.368885 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
I0812 23:38:48.375621 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
I0812 23:38:48.368063 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:48.377222 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
I0812 23:38:48.378917 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
I0812 23:38:48.380438 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
I0812 23:38:48.382448 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
I0812 23:38:48.383925 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
I0812 23:38:48.385456 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
I0812 23:38:48.387807 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
I0812 23:38:48.387981 199175 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0812 23:38:48.387994 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0812 23:38:48.388048 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:48.388409 199175 addons.go:135] Setting addon default-storageclass=true in "addons-20210812233805-198261"
W0812 23:38:48.388430 199175 addons.go:147] addon default-storageclass should already be in state true
I0812 23:38:48.388461 199175 host.go:66] Checking if "addons-20210812233805-198261" exists ...
I0812 23:38:48.388986 199175 cli_runner.go:115] Run: docker container inspect addons-20210812233805-198261 --format={{.State.Status}}
I0812 23:38:48.392435 199175 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0812 23:38:48.392528 199175 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0812 23:38:48.392540 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0812 23:38:48.392589 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:48.393966 199175 out.go:177] - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
I0812 23:38:48.394072 199175 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0812 23:38:48.394089 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
I0812 23:38:48.394143 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:48.396463 199175 out.go:177] - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
I0812 23:38:48.396521 199175 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0812 23:38:48.396530 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
I0812 23:38:48.396571 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:48.401015 199175 out.go:177] - Using image quay.io/operator-framework/olm:v0.17.0
I0812 23:38:48.402523 199175 out.go:177] - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
I0812 23:38:48.433646 199175 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0812 23:38:48.433820 199175 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
I0812 23:38:48.433985 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
I0812 23:38:48.434060 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:48.441958 199175 out.go:177] - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
I0812 23:38:48.442039 199175 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0812 23:38:48.442052 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0812 23:38:48.442107 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:48.445117 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:48.475462 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:48.492705 199175 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0812 23:38:48.492729 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0812 23:38:48.492785 199175 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812233805-198261
I0812 23:38:48.498724 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:48.515505 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:48.518086 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:48.523602 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:48.524158 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:48.528922 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:48.559404 199175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12230-195300-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812233805-198261/id_rsa Username:docker}
I0812 23:38:48.766575 199175 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
I0812 23:38:48.766605 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
I0812 23:38:48.841532 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0812 23:38:48.844744 199175 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0812 23:38:48.844771 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0812 23:38:48.953678 199175 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
I0812 23:38:48.953705 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
I0812 23:38:48.955501 199175 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
I0812 23:38:48.955526 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
I0812 23:38:48.960268 199175 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0812 23:38:48.960285 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
I0812 23:38:49.040749 199175 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
I0812 23:38:49.040779 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0812 23:38:49.041677 199175 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0812 23:38:49.041700 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0812 23:38:49.047641 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0812 23:38:49.055940 199175 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0812 23:38:49.055964 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0812 23:38:49.063903 199175 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
I0812 23:38:49.063930 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
I0812 23:38:49.141627 199175 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
I0812 23:38:49.141654 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
I0812 23:38:49.152105 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
I0812 23:38:49.161918 199175 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0812 23:38:49.161999 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
I0812 23:38:49.162111 199175 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0812 23:38:49.162134 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
I0812 23:38:49.164274 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
I0812 23:38:49.341271 199175 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0812 23:38:49.341299 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
I0812 23:38:49.356358 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0812 23:38:49.357714 199175 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0812 23:38:49.357737 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
I0812 23:38:49.357802 199175 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0812 23:38:49.357812 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0812 23:38:49.362451 199175 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0812 23:38:49.362472 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
I0812 23:38:49.558241 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0812 23:38:49.642310 199175 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0812 23:38:49.642394 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
I0812 23:38:49.656935 199175 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0812 23:38:49.656971 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
I0812 23:38:49.658601 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0812 23:38:49.744450 199175 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.310531397s)
I0812 23:38:49.744488 199175 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0812 23:38:49.763429 199175 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0812 23:38:49.763455 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
I0812 23:38:49.841049 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0812 23:38:50.046456 199175 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0812 23:38:50.046548 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
I0812 23:38:50.351031 199175 pod_ready.go:102] pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace has status "Ready":"False"
I0812 23:38:50.543163 199175 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0812 23:38:50.543219 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
I0812 23:38:50.648205 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.806628915s)
I0812 23:38:50.652580 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.604864121s)
I0812 23:38:50.763213 199175 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0812 23:38:50.763307 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
I0812 23:38:51.061623 199175 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0812 23:38:51.061651 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
I0812 23:38:51.349894 199175 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
I0812 23:38:51.350005 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
I0812 23:38:51.641228 199175 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0812 23:38:51.641259 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
I0812 23:38:51.844662 199175 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
I0812 23:38:51.844746 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
I0812 23:38:52.045124 199175 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0812 23:38:52.045205 199175 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0812 23:38:52.155811 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0812 23:38:52.365342 199175 pod_ready.go:102] pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace has status "Ready":"False"
I0812 23:38:52.751602 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (3.599405991s)
I0812 23:38:52.751724 199175 addons.go:313] Verifying addon ingress=true in "addons-20210812233805-198261"
I0812 23:38:52.753905 199175 out.go:177] * Verifying ingress addon...
I0812 23:38:52.756296 199175 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0812 23:38:52.850422 199175 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0812 23:38:52.850455 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
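[editor's note] The kapi.go entries here and below are minikube polling the cluster until every pod behind a label selector reports Running. An illustrative client-go sketch of that loop; the namespace, selector, and timeout mirror the ingress check above, while the function name and polling interval are assumptions rather than minikube's actual kapi.go:

// Sketch only: poll pods matching a label selector until all are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling on transient errors or empty lists
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}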
I0812 23:38:53.358688 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:53.859283 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:54.445446 199175 pod_ready.go:102] pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace has status "Ready":"False"
I0812 23:38:54.456296 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:54.862991 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:55.352481 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (6.188175208s)
W0812 23:38:55.352523 199175 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
stderr:
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
I0812 23:38:55.352542 199175 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
stderr:
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
I0812 23:38:55.352697 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.794422061s)
I0812 23:38:55.352818 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.996231659s)
I0812 23:38:55.352960 199175 addons.go:313] Verifying addon registry=true in "addons-20210812233805-198261"
I0812 23:38:55.352896 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.694210173s)
I0812 23:38:55.353099 199175 addons.go:313] Verifying addon metrics-server=true in "addons-20210812233805-198261"
I0812 23:38:55.354758 199175 out.go:177] * Verifying registry addon...
I0812 23:38:55.353586 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.512490963s)
W0812 23:38:55.354945 199175 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
I0812 23:38:55.355067 199175 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
I0812 23:38:55.355892 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:55.357299 199175 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0812 23:38:55.362689 199175 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0812 23:38:55.362712 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:38:55.629581 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
I0812 23:38:55.716127 199175 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
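[editor's note] The two apply failures above are an ordering race: crds.yaml and the snapshot CRDs are created in the same apply as resources that use those kinds, so the API server answers "no matches for kind" until the new CRDs are established, and retry.go schedules another apply a few hundred milliseconds later (the retries issued just above). A rough sketch of that retry-and-reapply idea; the helper name and backoff values are assumptions, not minikube's retry.go:

// Sketch only: re-run kubectl apply with a growing delay when it fails.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(attempts int, delay time.Duration, files ...string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return lastErr
}

func main() {
	if err := applyWithRetry(5, 300*time.Millisecond,
		"/etc/kubernetes/addons/crds.yaml",
		"/etc/kubernetes/addons/olm.yaml"); err != nil {
		panic(err)
	}
}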
I0812 23:38:55.859057 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:55.961127 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:38:56.451421 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:38:56.456920 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:56.847790 199175 pod_ready.go:102] pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace has status "Ready":"False"
I0812 23:38:56.855554 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:56.960055 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:38:57.444285 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:57.447526 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:38:57.855577 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:57.964260 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:38:58.360955 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:58.451441 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:38:58.662874 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.507005713s)
I0812 23:38:58.662913 199175 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210812233805-198261"
I0812 23:38:58.665631 199175 out.go:177] * Verifying csi-hostpath-driver addon...
I0812 23:38:58.667660 199175 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0812 23:38:58.745462 199175 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0812 23:38:58.745495 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:38:58.851312 199175 pod_ready.go:102] pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace has status "Ready":"False"
I0812 23:38:58.855627 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:58.867467 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:38:59.255879 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:38:59.354628 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:59.449684 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:38:59.757260 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:38:59.855097 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:38:59.951534 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:00.251438 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:00.356236 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:00.443028 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:00.763914 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:00.860397 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:00.942224 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:01.256474 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:01.359573 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:01.446719 199175 pod_ready.go:102] pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace has status "Ready":"False"
I0812 23:39:01.451427 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:01.943833 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:01.944966 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:01.946225 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:02.147136 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (6.517512058s)
I0812 23:39:02.147319 199175 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.431144672s)
I0812 23:39:02.252066 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:02.354559 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:02.367651 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:02.752696 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:02.855025 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:02.865936 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:03.251895 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:03.356767 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:03.367024 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:03.751372 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:03.845517 199175 pod_ready.go:102] pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace has status "Ready":"False"
I0812 23:39:03.854365 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:03.868228 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:04.250775 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:04.354846 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:04.367830 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:04.752067 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:04.854618 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:04.868890 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:05.250667 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:05.344568 199175 pod_ready.go:97] error getting pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6chdq" not found
I0812 23:39:05.344600 199175 pod_ready.go:81] duration metric: took 17.024214859s waiting for pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace to be "Ready" ...
E0812 23:39:05.344611 199175 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-6chdq" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6chdq" not found
I0812 23:39:05.344619 199175 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-wmflx" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.350192 199175 pod_ready.go:92] pod "coredns-558bd4d5db-wmflx" in "kube-system" namespace has status "Ready":"True"
I0812 23:39:05.350213 199175 pod_ready.go:81] duration metric: took 5.585933ms waiting for pod "coredns-558bd4d5db-wmflx" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.350225 199175 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210812233805-198261" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.354448 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:05.355797 199175 pod_ready.go:92] pod "etcd-addons-20210812233805-198261" in "kube-system" namespace has status "Ready":"True"
I0812 23:39:05.355826 199175 pod_ready.go:81] duration metric: took 5.590383ms waiting for pod "etcd-addons-20210812233805-198261" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.355841 199175 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210812233805-198261" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.360769 199175 pod_ready.go:92] pod "kube-apiserver-addons-20210812233805-198261" in "kube-system" namespace has status "Ready":"True"
I0812 23:39:05.360791 199175 pod_ready.go:81] duration metric: took 4.938477ms waiting for pod "kube-apiserver-addons-20210812233805-198261" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.360808 199175 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210812233805-198261" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.365285 199175 pod_ready.go:92] pod "kube-controller-manager-addons-20210812233805-198261" in "kube-system" namespace has status "Ready":"True"
I0812 23:39:05.365308 199175 pod_ready.go:81] duration metric: took 4.489729ms waiting for pod "kube-controller-manager-addons-20210812233805-198261" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.365323 199175 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ghhxv" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.366764 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:05.545563 199175 pod_ready.go:92] pod "kube-proxy-ghhxv" in "kube-system" namespace has status "Ready":"True"
I0812 23:39:05.545598 199175 pod_ready.go:81] duration metric: took 180.265559ms waiting for pod "kube-proxy-ghhxv" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.545614 199175 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210812233805-198261" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.751935 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:05.854703 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:05.867376 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:05.945298 199175 pod_ready.go:92] pod "kube-scheduler-addons-20210812233805-198261" in "kube-system" namespace has status "Ready":"True"
I0812 23:39:05.945332 199175 pod_ready.go:81] duration metric: took 399.707304ms waiting for pod "kube-scheduler-addons-20210812233805-198261" in "kube-system" namespace to be "Ready" ...
I0812 23:39:05.945347 199175 pod_ready.go:38] duration metric: took 17.63985873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0812 23:39:05.945376 199175 api_server.go:50] waiting for apiserver process to appear ...
I0812 23:39:05.945435 199175 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0812 23:39:06.066088 199175 api_server.go:70] duration metric: took 17.792893063s to wait for apiserver process to appear ...
I0812 23:39:06.066118 199175 api_server.go:86] waiting for apiserver healthz status ...
I0812 23:39:06.066132 199175 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0812 23:39:06.149106 199175 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
ok
I0812 23:39:06.150194 199175 api_server.go:139] control plane version: v1.21.3
I0812 23:39:06.150217 199175 api_server.go:129] duration metric: took 84.092391ms to wait for apiserver health ...
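[editor's note] The healthz check above is a plain HTTPS GET against https://192.168.49.2:8443/healthz that expects a 200 response with body "ok". A self-contained sketch of the same probe; skipping TLS verification is a simplification for the example, whereas the real check trusts the cluster CA:

// Sketch only: probe the apiserver healthz endpoint seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}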
I0812 23:39:06.150228 199175 system_pods.go:43] waiting for kube-system pods to appear ...
I0812 23:39:06.158467 199175 system_pods.go:59] 18 kube-system pods found
I0812 23:39:06.158513 199175 system_pods.go:61] "coredns-558bd4d5db-wmflx" [3106eabb-13b3-4f2b-bcca-3534861aa655] Running
I0812 23:39:06.158525 199175 system_pods.go:61] "csi-hostpath-attacher-0" [234256a6-f1f2-449f-aa0f-976951566fe4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0812 23:39:06.158535 199175 system_pods.go:61] "csi-hostpath-provisioner-0" [2d70ea1d-5713-4581-9b05-361c1e1d15de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
I0812 23:39:06.158548 199175 system_pods.go:61] "csi-hostpath-resizer-0" [5bc5b066-56de-4de4-80c0-998515045f37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0812 23:39:06.158558 199175 system_pods.go:61] "csi-hostpath-snapshotter-0" [a7bc224f-c759-4992-978e-28fd971f7c14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
I0812 23:39:06.158571 199175 system_pods.go:61] "csi-hostpathplugin-0" [6d4de853-7a79-4330-9778-be8c43160aef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
I0812 23:39:06.158585 199175 system_pods.go:61] "etcd-addons-20210812233805-198261" [0d4f6b1d-3d3b-4f5f-9b48-953604cbfd4a] Running
I0812 23:39:06.158595 199175 system_pods.go:61] "kube-apiserver-addons-20210812233805-198261" [4dc6f43d-5d08-4dea-9d8c-7e0bd7f9f54a] Running
I0812 23:39:06.158606 199175 system_pods.go:61] "kube-controller-manager-addons-20210812233805-198261" [9130441e-0201-4ff9-afa5-8153f939ea3f] Running
I0812 23:39:06.158613 199175 system_pods.go:61] "kube-proxy-ghhxv" [4499d272-579a-4a41-853e-5e8cbe70fc9f] Running
I0812 23:39:06.158620 199175 system_pods.go:61] "kube-scheduler-addons-20210812233805-198261" [16247d6d-599c-4f48-9742-721fc2fab228] Running
I0812 23:39:06.158631 199175 system_pods.go:61] "metrics-server-77c99ccb96-vjs6v" [4e2f321a-2744-4fbc-bb5f-bb37186d9320] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0812 23:39:06.158638 199175 system_pods.go:61] "registry-6xv8h" [6ab43726-8cdd-4125-a057-f9f01254017e] Running
I0812 23:39:06.158648 199175 system_pods.go:61] "registry-proxy-99l9s" [1970e154-033a-4ce7-9064-30f48936f4b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0812 23:39:06.158660 199175 system_pods.go:61] "snapshot-controller-989f9ddc8-sph8q" [69957ffb-86fa-4f23-8398-7cef2d3ef491] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0812 23:39:06.158681 199175 system_pods.go:61] "snapshot-controller-989f9ddc8-xbmqt" [c75a8161-998a-4e14-b0ed-291284de0812] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0812 23:39:06.158690 199175 system_pods.go:61] "storage-provisioner" [2f1892a7-3bc8-4aa8-9962-d9b4f9af38af] Running
I0812 23:39:06.158704 199175 system_pods.go:61] "tiller-deploy-768d69497-xf4kk" [f6c2788f-feb4-412c-a2e7-871e85ca30c4] Running / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0812 23:39:06.158714 199175 system_pods.go:74] duration metric: took 8.477596ms to wait for pod list to return data ...
I0812 23:39:06.158731 199175 default_sa.go:34] waiting for default service account to be created ...
I0812 23:39:06.251154 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:06.344948 199175 default_sa.go:45] found service account: "default"
I0812 23:39:06.344974 199175 default_sa.go:55] duration metric: took 186.224633ms for default service account to be created ...
I0812 23:39:06.344983 199175 system_pods.go:116] waiting for k8s-apps to be running ...
I0812 23:39:06.354306 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:06.366693 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:06.551064 199175 system_pods.go:86] 18 kube-system pods found
I0812 23:39:06.551098 199175 system_pods.go:89] "coredns-558bd4d5db-wmflx" [3106eabb-13b3-4f2b-bcca-3534861aa655] Running
I0812 23:39:06.551113 199175 system_pods.go:89] "csi-hostpath-attacher-0" [234256a6-f1f2-449f-aa0f-976951566fe4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0812 23:39:06.551124 199175 system_pods.go:89] "csi-hostpath-provisioner-0" [2d70ea1d-5713-4581-9b05-361c1e1d15de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
I0812 23:39:06.551134 199175 system_pods.go:89] "csi-hostpath-resizer-0" [5bc5b066-56de-4de4-80c0-998515045f37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0812 23:39:06.551145 199175 system_pods.go:89] "csi-hostpath-snapshotter-0" [a7bc224f-c759-4992-978e-28fd971f7c14] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
I0812 23:39:06.551159 199175 system_pods.go:89] "csi-hostpathplugin-0" [6d4de853-7a79-4330-9778-be8c43160aef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
I0812 23:39:06.551171 199175 system_pods.go:89] "etcd-addons-20210812233805-198261" [0d4f6b1d-3d3b-4f5f-9b48-953604cbfd4a] Running
I0812 23:39:06.551221 199175 system_pods.go:89] "kube-apiserver-addons-20210812233805-198261" [4dc6f43d-5d08-4dea-9d8c-7e0bd7f9f54a] Running
I0812 23:39:06.551235 199175 system_pods.go:89] "kube-controller-manager-addons-20210812233805-198261" [9130441e-0201-4ff9-afa5-8153f939ea3f] Running
I0812 23:39:06.551243 199175 system_pods.go:89] "kube-proxy-ghhxv" [4499d272-579a-4a41-853e-5e8cbe70fc9f] Running
I0812 23:39:06.551254 199175 system_pods.go:89] "kube-scheduler-addons-20210812233805-198261" [16247d6d-599c-4f48-9742-721fc2fab228] Running
I0812 23:39:06.551264 199175 system_pods.go:89] "metrics-server-77c99ccb96-vjs6v" [4e2f321a-2744-4fbc-bb5f-bb37186d9320] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0812 23:39:06.551277 199175 system_pods.go:89] "registry-6xv8h" [6ab43726-8cdd-4125-a057-f9f01254017e] Running
I0812 23:39:06.551287 199175 system_pods.go:89] "registry-proxy-99l9s" [1970e154-033a-4ce7-9064-30f48936f4b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0812 23:39:06.551299 199175 system_pods.go:89] "snapshot-controller-989f9ddc8-sph8q" [69957ffb-86fa-4f23-8398-7cef2d3ef491] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0812 23:39:06.551313 199175 system_pods.go:89] "snapshot-controller-989f9ddc8-xbmqt" [c75a8161-998a-4e14-b0ed-291284de0812] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0812 23:39:06.551320 199175 system_pods.go:89] "storage-provisioner" [2f1892a7-3bc8-4aa8-9962-d9b4f9af38af] Running
I0812 23:39:06.551330 199175 system_pods.go:89] "tiller-deploy-768d69497-xf4kk" [f6c2788f-feb4-412c-a2e7-871e85ca30c4] Running / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0812 23:39:06.551342 199175 system_pods.go:126] duration metric: took 206.354346ms to wait for k8s-apps to be running ...
I0812 23:39:06.551356 199175 system_svc.go:44] waiting for kubelet service to be running ....
I0812 23:39:06.551412 199175 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0812 23:39:06.563464 199175 system_svc.go:56] duration metric: took 12.096777ms WaitForService to wait for kubelet.
I0812 23:39:06.563495 199175 kubeadm.go:547] duration metric: took 18.290309817s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0812 23:39:06.563524 199175 node_conditions.go:102] verifying NodePressure condition ...
I0812 23:39:06.746566 199175 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
I0812 23:39:06.746597 199175 node_conditions.go:123] node cpu capacity is 8
I0812 23:39:06.746613 199175 node_conditions.go:105] duration metric: took 183.08369ms to run NodePressure ...
I0812 23:39:06.746627 199175 start.go:231] waiting for startup goroutines ...
I0812 23:39:06.751175 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:06.857238 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:06.868540 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:07.251651 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:07.354485 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:07.368399 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:07.752571 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:07.854766 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:07.867477 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:08.252352 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:08.355487 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:08.366828 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:08.752390 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:08.855608 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:08.867439 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:09.252463 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:09.354735 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:09.367628 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:09.752431 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:09.855270 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:09.867329 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:10.252011 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:10.354379 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:10.371234 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:10.751745 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:10.855329 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:10.868106 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:11.251164 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:11.354798 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:11.367166 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:11.751546 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:11.854302 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:11.981296 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:12.251464 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:12.355310 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:12.367327 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:12.751167 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:12.856598 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:12.866867 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:13.251532 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:13.354747 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:13.366926 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:13.765621 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:13.854237 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:13.867649 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0812 23:39:14.250931 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:14.355068 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:14.367439 199175 kapi.go:108] duration metric: took 19.010136213s to wait for kubernetes.io/minikube-addons=registry ...
I0812 23:39:14.754960 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:14.854610 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:15.251251 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:15.355164 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:15.755257 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:15.856891 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:16.252537 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:16.354447 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:16.751585 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:16.856455 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:17.251665 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:17.354982 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:17.751161 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:17.855330 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:18.251554 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:18.354683 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:18.752118 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:18.854661 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:19.251957 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:19.355162 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:19.751928 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:19.854891 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:20.251916 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:20.354349 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:20.750848 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:20.854852 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:21.251510 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:21.355070 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:21.754371 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:21.854636 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:22.444409 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:22.444856 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:22.753033 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:22.854374 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:23.255216 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:23.355476 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:23.751300 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:23.857266 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:24.250446 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:24.354742 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:24.751595 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:24.855030 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:25.252539 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:25.355336 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:25.751004 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:25.854730 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:26.252273 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:26.355062 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:26.752315 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:26.854314 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:27.251128 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:27.354925 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:27.750558 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:27.855014 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:28.251308 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:28.355023 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:28.751419 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:28.855076 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:29.251894 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:29.354404 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:29.751415 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:29.854855 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:30.250534 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:30.354904 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:30.750492 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:30.855496 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:31.250776 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:31.354600 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:31.751394 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:31.853782 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:32.250828 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:32.353584 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:32.750427 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:32.854233 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:33.251705 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:33.354462 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:33.752288 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:33.855568 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:34.251787 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:34.354890 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:34.755338 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:34.857605 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:35.253244 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:35.356210 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:35.842707 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:35.862156 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:36.251427 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:36.354280 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:36.750657 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:36.854256 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:37.251761 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:37.354792 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:37.750796 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:37.854547 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:38.251597 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:38.354886 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:38.751420 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:38.853873 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:39.250403 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:39.354454 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:39.751397 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:39.853773 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:40.250962 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:40.354316 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:40.751638 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:40.854613 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:41.301241 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:41.358627 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:41.750655 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:41.854463 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:42.264899 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:42.357820 199175 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0812 23:39:42.752555 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:42.854462 199175 kapi.go:108] duration metric: took 50.098154325s to wait for app.kubernetes.io/name=ingress-nginx ...
I0812 23:39:43.254856 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:43.757175 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:44.252595 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:44.752743 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:45.252541 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:45.758908 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:46.255062 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:46.945897 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:47.252243 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:47.754904 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:48.264989 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:48.753699 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:49.252066 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:49.753415 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:50.255836 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:50.753492 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:51.253202 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:51.752003 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:52.948040 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:53.252157 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:53.754400 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:54.252651 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:54.751858 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:55.253154 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:55.753224 199175 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0812 23:39:56.252268 199175 kapi.go:108] duration metric: took 57.584602349s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0812 23:39:56.254650 199175 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, helm-tiller, metrics-server, olm, volumesnapshots, registry, ingress, csi-hostpath-driver
I0812 23:39:56.254683 199175 addons.go:344] enableAddons completed in 1m7.981469703s
I0812 23:39:56.310191 199175 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
I0812 23:39:56.312429 199175 out.go:177] * Done! kubectl is now configured to use "addons-20210812233805-198261" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Logs begin at Thu 2021-08-12 23:38:08 UTC, end at Thu 2021-08-12 23:46:57 UTC. --
Aug 12 23:40:15 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:15.958775214Z" level=info msg="ignoring event" container=1fd7b434e3577696815b52e28b6adc3a441f75e32d02543fa9823c2c366a2db8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:16 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:16.846268570Z" level=warning msg="reference for unknown type: " digest="sha256:c407ad6ee97d8a0e8a21c713e2d9af66aaf73315e4a123874c00b786f962f3cd" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:c407ad6ee97d8a0e8a21c713e2d9af66aaf73315e4a123874c00b786f962f3cd"
Aug 12 23:40:16 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:16.924233949Z" level=info msg="ignoring event" container=de6a655efe48db4b2207df9d3ce09c80aa6005927fa9b64acd030b3229502604 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:24 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:24.678144533Z" level=info msg="ignoring event" container=37a0544ccb39d794fbf7c3be231ea2fd9580cb8ddd98d5712fb081f0b5ff3840 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:24 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:24.856701859Z" level=info msg="ignoring event" container=21baad333c48f3132441010b2a00581df681c54d00e3584f712c01f5cf015541 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:40 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:40.157759288Z" level=info msg="ignoring event" container=6ad121642c4b1e5dcd3be2c8df391191125e068ff5ad27b98c688d594024ec48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:41 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:41.203850219Z" level=info msg="ignoring event" container=832a42274017e1bb1fc78fdb0d99e9ad69f5f2effca036f1f5dfb60e5cb10709 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:41 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:41.449383446Z" level=info msg="ignoring event" container=5f52700d4e393803f0d4d0e0569c3c7fa6846dcf06fa7824c3ca6b962e57b9f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:41 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:41.556300769Z" level=info msg="ignoring event" container=4092208f60457317eb7edf4d1c38126e1a3035a94bc19d75303af3ec179e267d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:41 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:41.761911050Z" level=info msg="ignoring event" container=373922d0eed9ed562e3169dcdef26acfe80640e940c118f07d3533427bb7dd1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:41 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:41.884852870Z" level=info msg="ignoring event" container=a41e334c91d3a105ecbe7a35fbd031e560d03dc8913fd8bb5e6153133f08c03f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:41 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:41.944566214Z" level=info msg="ignoring event" container=09ed0b3e266c678c28779d1b4e37583590ba69503e3ea2bfefe06bc85e597443 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:41 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:41.995464271Z" level=info msg="ignoring event" container=1ef0cdedd7ca87a3be28fef2688079efd6037e79317f2f2b81333df6585cab26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:46 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:46.481258034Z" level=info msg="ignoring event" container=878b8bfab7169c22b3a1646e0b68e30b17c63797dd3302472c312b6c2b591acb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:46 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:46.597716910Z" level=info msg="ignoring event" container=fd22b82d955413eebb32cd7b7cf2b53d020948c5211f2f94774dca8dd0340236 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:49 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:49.858040580Z" level=info msg="ignoring event" container=78a1bf6813a43c4b8675636d2b70f914f421ba2c0cb4bdf7b8c5eef8934ad5ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:50 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:50.819267102Z" level=info msg="ignoring event" container=151d5df7b12e93a6aaf29962a1a0b60895d3701f85ff368df11bf308f11c5af5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:52 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:52.219213195Z" level=info msg="ignoring event" container=c19329f0d0baa024bc03962f627b488473fd92053e8a920a69ad255c0abe2967 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:52 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:52.932193106Z" level=info msg="ignoring event" container=e9bebe4dea69788f48750e55802c9dcd77236742d3ce6503d3238f0470c895fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:53 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:53.357358562Z" level=info msg="ignoring event" container=2e0d6cf16bbb270ab8a2deb70710eddcd0f6a1abdc7c54226cab769fb3af9e65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:53 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:53.451537180Z" level=info msg="ignoring event" container=00221c4c2036ee0f2a9b2a12eb0b778507daec21cd630b924d8e346b08deb0f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:56 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:56.301287856Z" level=info msg="ignoring event" container=853b051e1a5447a4001a80362541c99f2e375b244e7c7a7d6f026df3ddfdbd81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:40:56 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:40:56.421090565Z" level=info msg="ignoring event" container=8aaa91595f94c4b506299696827c3458906cb14d52c878db1f7248465f26ebd8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 23:41:28 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:41:28.102283577Z" level=warning msg="reference for unknown type: " digest="sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b" remote="quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b"
Aug 12 23:41:28 addons-20210812233805-198261 dockerd[456]: time="2021-08-12T23:41:28.233021680Z" level=warning msg="Error persisting manifest" digest="sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:ac5496c52004b990f981ee419eb338921753c35db97e382496b21f15c7d23792, expected sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b: failed precondition" remote="quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
ac61f52487c44 9d5c51d92fbdd 5 minutes ago Running etcd-restore-operator 0 2e85274474a3a
b0f3257a07c3a 9d5c51d92fbdd 5 minutes ago Running etcd-backup-operator 0 2e85274474a3a
f0e2b2bd3a198 quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b 5 minutes ago Running etcd-operator 0 2e85274474a3a
5184903658e6e europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8 6 minutes ago Running private-image-eu 0 0d9bfc9a118ea
4b28715c7862e us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8 6 minutes ago Running private-image 0 cfb9f23e8c6bc
9a29d1c12a233 nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce 6 minutes ago Running nginx 0 6a10639063328
1aaaf043fedb5 busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1 6 minutes ago Running busybox 0 07f6d272965d5
27bf3e514b44a k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 7 minutes ago Running liveness-probe 0 a8b3ef3a92758
ef3b0d8560cf3 k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659 7 minutes ago Running hostpath 0 a8b3ef3a92758
c214dd1217f83 k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 7 minutes ago Running node-driver-registrar 0 a8b3ef3a92758
3f54e5d9a63e5 quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607 7 minutes ago Running packageserver 0 29ef2f466d4f7
7f9ba7e13bed5 quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607 7 minutes ago Running packageserver 0 92b8234867560
54db004626441 k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16 7 minutes ago Running csi-external-health-monitor-controller 0 a8b3ef3a92758
26e260f19c9a6 quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0 7 minutes ago Running registry-server 0 cfffd849a2d02
d5e3c84c90239 k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 7 minutes ago Running csi-snapshotter 0 42682ce33d3e7
23ca18a80907d k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 7 minutes ago Running csi-attacher 0 504d943ae07e5
ad57fe8606174 k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a 7 minutes ago Running csi-resizer 0 3e6ab84ed244a
0fbf3b7ab55bb k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 7 minutes ago Running csi-provisioner 0 c788e2db9273a
aa3dafd56c111 k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 7 minutes ago Running volume-snapshot-controller 0 68148cbf35268
c94ac94581072 k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02 7 minutes ago Running csi-external-health-monitor-agent 0 a8b3ef3a92758
e4294facbd56d quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607 7 minutes ago Running catalog-operator 0 b40a8aea08bcd
690734df52e7d quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607 7 minutes ago Running olm-operator 0 a285aff50a725
b0e5bb3b4dbd3 k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 7 minutes ago Running volume-snapshot-controller 0 a3b43cb2c174a
0566e40e5b992 6e38f40d628db 8 minutes ago Running storage-provisioner 0 aeb76b15a4565
0c7b680bd45f3 296a6d5035e2d 8 minutes ago Running coredns 0 7cd681ba6198a
500c496c4b0ed adb2816ea823a 8 minutes ago Running kube-proxy 0 dbb19d2f59a47
9cf8b2804c090 bc2bb319a7038 8 minutes ago Running kube-controller-manager 0 c7c229dd027c1
f3719dd1e4d05 6be0dc1302e30 8 minutes ago Running kube-scheduler 0 a32c2acaf9c8b
19129c370aac4 0369cf4303ffd 8 minutes ago Running etcd 0 ede2ddeb97b08
6a3dc6033a696 3d174f00aa39e 8 minutes ago Running kube-apiserver 0 a44e6d7d97b56
*
* ==> coredns [0c7b680bd45f] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
[INFO] Reloading complete
E0812 23:38:51.345090 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
E0812 23:38:51.345094 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
E0812 23:38:51.345562 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> describe nodes <==
* Name: addons-20210812233805-198261
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-20210812233805-198261
kubernetes.io/os=linux
minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
minikube.k8s.io/name=addons-20210812233805-198261
minikube.k8s.io/updated_at=2021_08_12T23_38_34_0700
minikube.k8s.io/version=v1.22.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-20210812233805-198261
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210812233805-198261"}
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 12 Aug 2021 23:38:32 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-20210812233805-198261
AcquireTime: <unset>
RenewTime: Thu, 12 Aug 2021 23:46:56 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 12 Aug 2021 23:46:39 +0000 Thu, 12 Aug 2021 23:38:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 12 Aug 2021 23:46:39 +0000 Thu, 12 Aug 2021 23:38:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 12 Aug 2021 23:46:39 +0000 Thu, 12 Aug 2021 23:38:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 12 Aug 2021 23:46:39 +0000 Thu, 12 Aug 2021 23:38:45 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-20210812233805-198261
Capacity:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951368Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951368Ki
pods: 110
System Info:
Machine ID: 760e67beb8554645829f2357c8eb4ae7
System UUID: d2ba194a-b5c6-436f-a0ec-db8fe24f91c8
Boot ID: 9ce94195-0b04-49bb-822e-acbe15980998
Kernel Version: 4.9.0-16-amd64
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.3
Kube-Proxy Version: v1.21.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (25 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
default                     nginx                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
default                     private-image-7ff9c8c74f-rrdjz                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
default                     private-image-eu-5956d58f9f-fdf9r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
default                     task-pv-pod-restore                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
kube-system                 coredns-558bd4d5db-wmflx                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m10s
kube-system                 csi-hostpath-attacher-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
kube-system                 csi-hostpath-provisioner-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
kube-system                 csi-hostpath-resizer-0                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
kube-system                 csi-hostpath-snapshotter-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
kube-system                 csi-hostpathplugin-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
kube-system                 etcd-addons-20210812233805-198261                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m22s
kube-system                 kube-apiserver-addons-20210812233805-198261              250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m24s
kube-system                 kube-controller-manager-addons-20210812233805-198261     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m22s
kube-system                 kube-proxy-ghhxv                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
kube-system                 kube-scheduler-addons-20210812233805-198261              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m22s
kube-system                 snapshot-controller-989f9ddc8-sph8q                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
kube-system                 snapshot-controller-989f9ddc8-xbmqt                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
my-etcd                     etcd-operator-85cd4f54cd-v6nj5                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
olm                         catalog-operator-75d496484d-6jdvj                        10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         8m2s
olm                         olm-operator-859c88c96-crfzm                             10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         8m3s
olm                         operatorhubio-catalog-bjxnr                              10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         7m36s
olm                         packageserver-6ff888cfc4-r5dbx                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
olm                         packageserver-6ff888cfc4-zgcwz                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                780m (9%)   0 (0%)
memory             460Mi (1%)  170Mi (0%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-1Gi      0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m37s (x5 over 8m38s) kubelet Node addons-20210812233805-198261 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m37s (x4 over 8m38s) kubelet Node addons-20210812233805-198261 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m37s (x4 over 8m38s) kubelet Node addons-20210812233805-198261 status is now: NodeHasSufficientPID
Normal Starting 8m22s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m22s kubelet Node addons-20210812233805-198261 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m22s kubelet Node addons-20210812233805-198261 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m22s kubelet Node addons-20210812233805-198261 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m22s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m12s kubelet Node addons-20210812233805-198261 status is now: NodeReady
Normal Starting 8m6s kube-proxy Starting kube-proxy.
*
* ==> dmesg <==
* [Aug12 23:21] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:22] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:23] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:24] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:25] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:26] cgroup: cgroup2: unknown option "nsdelegate"
[ +58.019917] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:27] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:28] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:29] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:30] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:31] cgroup: cgroup2: unknown option "nsdelegate"
[ +47.099724] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:32] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:33] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:34] cgroup: cgroup2: unknown option "nsdelegate"
[ +34.079436] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:35] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:36] cgroup: cgroup2: unknown option "nsdelegate"
[Aug12 23:38] cgroup: cgroup2: unknown option "nsdelegate"
*
* ==> etcd [19129c370aac] <==
* 2021-08-12 23:42:52.896696 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:43:02.897212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:43:12.896604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:43:22.896705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:43:32.897301 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:43:42.897228 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:43:52.897525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:44:02.896818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:44:12.896962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:44:22.896732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:44:32.896944 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:44:42.896561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:44:52.897208 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:45:02.896792 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:45:12.896858 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:45:22.896787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:45:32.897225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:45:42.896771 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:45:52.897093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:46:02.896862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:46:12.897271 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:46:22.896453 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:46:32.896414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:46:42.897347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-12 23:46:52.896979 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> etcd [ac61f52487c4] <==
* time="2021-08-12T23:41:32Z" level=info msg="Go Version: go1.11.5"
time="2021-08-12T23:41:32Z" level=info msg="Go OS/Arch: linux/amd64"
time="2021-08-12T23:41:32Z" level=info msg="etcd-restore-operator Version: 0.9.4"
time="2021-08-12T23:41:32Z" level=info msg="Git SHA: c8a1c64"
E0812 23:41:32.192734 1 leaderelection.go:274] error initially creating leader election record: endpoints "etcd-restore-operator" already exists
E0812 23:41:35.650567 1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-restore-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"4c1984cf-4293-492f-9038-51d2dc4aea5b", ResourceVersion:"2123", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764408492, loc:(*time.Location)(0x24e11a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"etcd-operator-alm-owned"}, Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-v6nj5\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-12T23:41:35Z\",\"renewTime\":\"2021-08-12T23:41:35Z\",\"leaderTransitions\":1}", "endpoints.kubernetes.io/last-change-trigger-time":"2021-08-12T23:41:32Z"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-v6nj5 became leader'
time="2021-08-12T23:41:35Z" level=info msg="listening on 0.0.0.0:19999"
time="2021-08-12T23:41:35Z" level=info msg="starting restore controller" pkg=controller
*
* ==> etcd [b0f3257a07c3] <==
* time="2021-08-12T23:41:31Z" level=info msg="Go Version: go1.11.5"
time="2021-08-12T23:41:31Z" level=info msg="Go OS/Arch: linux/amd64"
time="2021-08-12T23:41:31Z" level=info msg="etcd-backup-operator Version: 0.9.4"
time="2021-08-12T23:41:31Z" level=info msg="Git SHA: c8a1c64"
E0812 23:41:31.780078 1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-backup-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"d9e2f963-1994-4a01-9558-f3a2d994f62a", ResourceVersion:"1988", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764408491, loc:(*time.Location)(0x25824c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-v6nj5\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-12T23:41:31Z\",\"renewTime\":\"2021-08-12T23:41:31Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-v6nj5 became leader'
time="2021-08-12T23:41:31Z" level=info msg="starting backup controller" pkg=controller
*
* ==> etcd [f0e2b2bd3a19] <==
* time="2021-08-12T23:41:31Z" level=info msg="etcd-operator Version: 0.9.4"
time="2021-08-12T23:41:31Z" level=info msg="Git SHA: c8a1c64"
time="2021-08-12T23:41:31Z" level=info msg="Go Version: go1.11.5"
time="2021-08-12T23:41:31Z" level=info msg="Go OS/Arch: linux/amd64"
E0812 23:41:31.514867 1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"953e3df3-c3de-4eb7-81ef-d714991a755f", ResourceVersion:"1984", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764408491, loc:(*time.Location)(0x20d4640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-v6nj5\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-12T23:41:31Z\",\"renewTime\":\"2021-08-12T23:41:31Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-v6nj5 became leader'
*
* ==> kernel <==
* 23:46:57 up 3:29, 0 users, load average: 0.23, 2.97, 3.29
Linux addons-20210812233805-198261 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [6a3dc6033a69] <==
* I0812 23:41:35.670764 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
I0812 23:41:58.089723 1 client.go:360] parsed scheme: "passthrough"
I0812 23:41:58.089766 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0812 23:41:58.089779 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0812 23:42:31.029266 1 client.go:360] parsed scheme: "passthrough"
I0812 23:42:31.029311 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0812 23:42:31.029327 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0812 23:43:10.051227 1 client.go:360] parsed scheme: "passthrough"
I0812 23:43:10.051279 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0812 23:43:10.051288 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0812 23:43:46.761175 1 client.go:360] parsed scheme: "passthrough"
I0812 23:43:46.761223 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0812 23:43:46.761232 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0812 23:44:31.537604 1 client.go:360] parsed scheme: "passthrough"
I0812 23:44:31.537655 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0812 23:44:31.537664 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0812 23:45:11.507473 1 client.go:360] parsed scheme: "passthrough"
I0812 23:45:11.507515 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0812 23:45:11.507523 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0812 23:45:46.344801 1 client.go:360] parsed scheme: "passthrough"
I0812 23:45:46.344855 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0812 23:45:46.344864 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0812 23:46:24.038995 1 client.go:360] parsed scheme: "passthrough"
I0812 23:46:24.039049 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0812 23:46:24.039061 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
*
* ==> kube-controller-manager [9cf8b2804c09] <==
* I0812 23:40:20.492348 1 event.go:291] "Event occurred" object="default/task-pv-pod" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-165fbf66-e46f-4797-87d7-11370d5ae5d9\" "
I0812 23:40:29.280297 1 event.go:291] "Event occurred" object="default/private-image" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set private-image-7ff9c8c74f to 1"
I0812 23:40:29.290089 1 event.go:291] "Event occurred" object="default/private-image-7ff9c8c74f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: private-image-7ff9c8c74f-rrdjz"
E0812 23:40:39.970074 1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-4wtrk" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
I0812 23:40:45.498109 1 event.go:291] "Event occurred" object="default/private-image-eu" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set private-image-eu-5956d58f9f to 1"
I0812 23:40:45.505315 1 event.go:291] "Event occurred" object="default/private-image-eu-5956d58f9f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: private-image-eu-5956d58f9f-fdf9r"
I0812 23:40:47.379895 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-165fbf66-e46f-4797-87d7-11370d5ae5d9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^a76b33aa-fbc6-11eb-b403-0242ac11000e") on node "addons-20210812233805-198261"
I0812 23:40:47.382336 1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-165fbf66-e46f-4797-87d7-11370d5ae5d9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^a76b33aa-fbc6-11eb-b403-0242ac11000e") on node "addons-20210812233805-198261"
I0812 23:40:47.924796 1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-165fbf66-e46f-4797-87d7-11370d5ae5d9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^a76b33aa-fbc6-11eb-b403-0242ac11000e") on node "addons-20210812233805-198261"
I0812 23:40:55.582815 1 event.go:291] "Event occurred" object="default/hpvc-restore" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
I0812 23:40:55.887261 1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-884489b9-6b33-4e08-8e9d-b25a55ce9e62" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^bce1ff8b-fbc6-11eb-b403-0242ac11000e") from node "addons-20210812233805-198261"
I0812 23:40:56.458703 1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-884489b9-6b33-4e08-8e9d-b25a55ce9e62" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^bce1ff8b-fbc6-11eb-b403-0242ac11000e") from node "addons-20210812233805-198261"
I0812 23:40:56.458870 1 event.go:291] "Event occurred" object="default/task-pv-pod-restore" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-884489b9-6b33-4e08-8e9d-b25a55ce9e62\" "
E0812 23:41:01.153729 1 tokens_controller.go:262] error synchronizing serviceaccount gcp-auth/default: secrets "default-token-86q8z" is forbidden: unable to create new content in namespace gcp-auth because it is being terminated
I0812 23:41:06.669405 1 namespace_controller.go:185] Namespace has been deleted gcp-auth
I0812 23:41:06.676572 1 namespace_controller.go:185] Namespace has been deleted ingress-nginx
I0812 23:41:27.056046 1 event.go:291] "Event occurred" object="my-etcd/etcd-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set etcd-operator-85cd4f54cd to 1"
I0812 23:41:27.067567 1 event.go:291] "Event occurred" object="my-etcd/etcd-operator-85cd4f54cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: etcd-operator-85cd4f54cd-v6nj5"
I0812 23:41:48.234927 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for etcdbackups.etcd.database.coreos.com
I0812 23:41:48.234993 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for etcdrestores.etcd.database.coreos.com
I0812 23:41:48.235031 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for etcdclusters.etcd.database.coreos.com
I0812 23:41:48.235098 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0812 23:41:48.336083 1 shared_informer.go:247] Caches are synced for resource quota
I0812 23:41:48.374063 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0812 23:41:48.374126 1 shared_informer.go:247] Caches are synced for garbage collector
*
* ==> kube-proxy [500c496c4b0e] <==
* I0812 23:38:50.965648 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0812 23:38:50.965733 1 server_others.go:140] Detected node IP 192.168.49.2
W0812 23:38:50.965759 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I0812 23:38:51.352810 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0812 23:38:51.352851 1 server_others.go:212] Using iptables Proxier.
I0812 23:38:51.352866 1 server_others.go:219] creating dualStackProxier for iptables.
W0812 23:38:51.352881 1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0812 23:38:51.353196 1 server.go:643] Version: v1.21.3
I0812 23:38:51.354579 1 config.go:315] Starting service config controller
I0812 23:38:51.354602 1 shared_informer.go:240] Waiting for caches to sync for service config
I0812 23:38:51.354626 1 config.go:224] Starting endpoint slice config controller
I0812 23:38:51.354633 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0812 23:38:51.363054 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0812 23:38:51.451959 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0812 23:38:51.455505 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0812 23:38:51.455562 1 shared_informer.go:247] Caches are synced for service config
W0812 23:44:03.448936 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
*
* ==> kube-scheduler [f3719dd1e4d0] <==
* W0812 23:38:31.440626 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0812 23:38:31.454837 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0812 23:38:31.455446 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0812 23:38:31.455489 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0812 23:38:31.455513 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0812 23:38:31.463215 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0812 23:38:31.463258 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0812 23:38:31.463517 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0812 23:38:31.463543 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0812 23:38:31.463622 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0812 23:38:31.463713 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0812 23:38:31.463788 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0812 23:38:31.463842 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0812 23:38:31.463846 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0812 23:38:31.463914 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0812 23:38:31.463970 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0812 23:38:31.464022 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0812 23:38:31.464680 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0812 23:38:31.465172 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0812 23:38:32.289582 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0812 23:38:32.303614 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0812 23:38:32.356887 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0812 23:38:32.375851 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0812 23:38:32.461712 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0812 23:38:34.356491 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Thu 2021-08-12 23:38:08 UTC, end at Thu 2021-08-12 23:46:57 UTC. --
Aug 12 23:44:00 addons-20210812233805-198261 kubelet[2478]: I0812 23:44:00.441733 2478 clientconn.go:948] ClientConn switching balancer to "pick_first"
Aug 12 23:44:00 addons-20210812233805-198261 kubelet[2478]: I0812 23:44:00.441798 2478 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
Aug 12 23:44:05 addons-20210812233805-198261 kubelet[2478]: I0812 23:44:05.277390 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Aug 12 23:44:19 addons-20210812233805-198261 kubelet[2478]: I0812 23:44:19.277382 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-rrdjz" secret="" err="secret \"gcp-auth\" not found"
Aug 12 23:44:29 addons-20210812233805-198261 kubelet[2478]: I0812 23:44:29.276484 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
Aug 12 23:44:30 addons-20210812233805-198261 kubelet[2478]: I0812 23:44:30.277029 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-fdf9r" secret="" err="secret \"gcp-auth\" not found"
Aug 12 23:45:06 addons-20210812233805-198261 kubelet[2478]: E0812 23:45:06.027575 2478 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/host-path/01204c8f-17b1-4895-b733-b2a3f783d95c-gcp-creds podName:01204c8f-17b1-4895-b733-b2a3f783d95c nodeName:}" failed. No retries permitted until 2021-08-12 23:47:08.027548388 +0000 UTC m=+513.549111714 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/01204c8f-17b1-4895-b733-b2a3f783d95c-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"01204c8f-17b1-4895-b733-b2a3f783d95c\") : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file"
Aug 12 23:45:08 addons-20210812233805-198261 kubelet[2478]: I0812 23:45:08.505056 2478 clientconn.go:106] parsed scheme: ""
Aug 12 23:45:08 addons-20210812233805-198261 kubelet[2478]: I0812 23:45:08.505088 2478 clientconn.go:106] scheme "" not registered, fallback to default scheme
Aug 12 23:45:08 addons-20210812233805-198261 kubelet[2478]: I0812 23:45:08.505149 2478 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-hostpath/csi.sock <nil> 0 <nil>}] <nil> <nil>}
Aug 12 23:45:08 addons-20210812233805-198261 kubelet[2478]: I0812 23:45:08.505163 2478 clientconn.go:948] ClientConn switching balancer to "pick_first"
Aug 12 23:45:08 addons-20210812233805-198261 kubelet[2478]: I0812 23:45:08.505219 2478 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
Aug 12 23:45:15 addons-20210812233805-198261 kubelet[2478]: E0812 23:45:15.278400 2478 kubelet.go:1701] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-n9bg8 gcp-creds task-pv-storage]: timed out waiting for the condition" pod="default/task-pv-pod-restore"
Aug 12 23:45:15 addons-20210812233805-198261 kubelet[2478]: E0812 23:45:15.279117 2478 pod_workers.go:190] "Error syncing pod, skipping" err="unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-n9bg8 gcp-creds task-pv-storage]: timed out waiting for the condition" pod="default/task-pv-pod-restore" podUID=01204c8f-17b1-4895-b733-b2a3f783d95c
Aug 12 23:45:17 addons-20210812233805-198261 kubelet[2478]: I0812 23:45:17.277240 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Aug 12 23:45:39 addons-20210812233805-198261 kubelet[2478]: I0812 23:45:39.276984 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-rrdjz" secret="" err="secret \"gcp-auth\" not found"
Aug 12 23:45:51 addons-20210812233805-198261 kubelet[2478]: I0812 23:45:51.276794 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
Aug 12 23:45:58 addons-20210812233805-198261 kubelet[2478]: I0812 23:45:58.277020 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-fdf9r" secret="" err="secret \"gcp-auth\" not found"
Aug 12 23:46:32 addons-20210812233805-198261 kubelet[2478]: I0812 23:46:32.983924 2478 clientconn.go:106] parsed scheme: ""
Aug 12 23:46:32 addons-20210812233805-198261 kubelet[2478]: I0812 23:46:32.983951 2478 clientconn.go:106] scheme "" not registered, fallback to default scheme
Aug 12 23:46:32 addons-20210812233805-198261 kubelet[2478]: I0812 23:46:32.983999 2478 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-hostpath/csi.sock <nil> 0 <nil>}] <nil> <nil>}
Aug 12 23:46:32 addons-20210812233805-198261 kubelet[2478]: I0812 23:46:32.984007 2478 clientconn.go:948] ClientConn switching balancer to "pick_first"
Aug 12 23:46:32 addons-20210812233805-198261 kubelet[2478]: I0812 23:46:32.984066 2478 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
Aug 12 23:46:43 addons-20210812233805-198261 kubelet[2478]: I0812 23:46:43.276522 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Aug 12 23:46:55 addons-20210812233805-198261 kubelet[2478]: I0812 23:46:55.277344 2478 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-rrdjz" secret="" err="secret \"gcp-auth\" not found"
*
* ==> storage-provisioner [0566e40e5b99] <==
* I0812 23:38:53.843339 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0812 23:38:53.951511 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0812 23:38:53.951572 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0812 23:38:54.043683 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0812 23:38:54.043863 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210812233805-198261_a7d0e3f0-7af4-4f29-979e-2c27ccdacae0!
I0812 23:38:54.044958 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8634684a-0bc4-4047-bac6-07ab9dcca4aa", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210812233805-198261_a7d0e3f0-7af4-4f29-979e-2c27ccdacae0 became leader
I0812 23:38:54.144014 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210812233805-198261_a7d0e3f0-7af4-4f29-979e-2c27ccdacae0!
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210812233805-198261 -n addons-20210812233805-198261
helpers_test.go:262: (dbg) Run: kubectl --context addons-20210812233805-198261 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: task-pv-pod-restore
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context addons-20210812233805-198261 describe pod task-pv-pod-restore
helpers_test.go:281: (dbg) kubectl --context addons-20210812233805-198261 describe pod task-pv-pod-restore:
-- stdout --
Name:         task-pv-pod-restore
Namespace:    default
Priority:     0
Node:         addons-20210812233805-198261/192.168.49.2
Start Time:   Thu, 12 Aug 2021 23:40:55 +0000
Labels:       app=task-pv-pod-restore
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  task-pv-container:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      k8s-minikube
      GCP_PROJECT:                     k8s-minikube
      GCLOUD_PROJECT:                  k8s-minikube
      GOOGLE_CLOUD_PROJECT:            k8s-minikube
      CLOUDSDK_CORE_PROJECT:           k8s-minikube
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n9bg8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc-restore
    ReadOnly:   false
  kube-api-access-n9bg8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Normal   Scheduled               6m3s                  default-scheduler        Successfully assigned default/task-pv-pod-restore to addons-20210812233805-198261
  Normal   SuccessfulAttachVolume  6m2s                  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-884489b9-6b33-4e08-8e9d-b25a55ce9e62"
  Warning  FailedMount             4m                    kubelet                  Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[gcp-creds task-pv-storage kube-api-access-n9bg8]: timed out waiting for the condition
  Warning  FailedMount             112s (x10 over 6m2s)  kubelet                  MountVolume.SetUp failed for volume "gcp-creds" : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
  Warning  FailedMount             103s                  kubelet                  Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-n9bg8 gcp-creds task-pv-storage]: timed out waiting for the condition
-- /stdout --
helpers_test.go:284: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:285: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (399.46s)
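
Note on the failure mode visible in both the kubelet log and the pod events above: task-pv-pod-restore never leaves ContainerCreating because its gcp-creds volume is a hostPath with HostPathType: File pointing at /var/lib/minikube/google_application_credentials.json, and the kubelet's hostPath type check keeps failing with "is not a file", so the mount is retried until the test's 6m0s wait times out. The Go sketch below is a hypothetical illustration of that type check; checkHostPathFile is not a real kubelet or minikube function, it simply mirrors the semantics of a "File" hostPath (the path must exist on the node as a regular file) and the error text seen in the events.

// Hypothetical sketch of the hostPath "File" type check reported by the
// FailedMount events above; not kubelet's actual implementation.
package main

import (
	"fmt"
	"os"
)

// checkHostPathFile (hypothetical helper) succeeds only if path exists and
// is a regular file, which is what a hostPath volume with type File requires.
func checkHostPathFile(path string) error {
	info, err := os.Stat(path)
	if err != nil {
		// Missing path: the mount cannot be set up.
		return fmt.Errorf("hostPath type check failed: %s: %w", path, err)
	}
	if !info.Mode().IsRegular() {
		// Present but not a regular file (e.g. a directory).
		return fmt.Errorf("hostPath type check failed: %s is not a file", path)
	}
	return nil
}

func main() {
	// Path taken from the gcp-creds volume in the describe output above.
	if err := checkHostPathFile("/var/lib/minikube/google_application_credentials.json"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("path exists and is a regular file; the gcp-creds mount would succeed")
	}
}

Run on a node where no credentials file has been written, this prints the same "is not a file" message the kubelet logs, which is consistent with the gcp-auth-related warnings ("secret \"gcp-auth\" not found") elsewhere in the kubelet output.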