W1113 23:06:01.565] **************************************************************************
bootstrap.py is deprecated!
test-infra oncall does not support any job still using bootstrap.py.
Please migrate your job to podutils!
https://github.com/kubernetes/test-infra/blob/master/prow/pod-utilities.md
**************************************************************************
I1113 23:06:01.566] Args: --job=chewong-pull-in-tree-azure-disk-e2e-vmss --service-account=/etc/service-account/service-account.json --upload=gs://kubernetes-jenkins/logs --job=chewong-pull-in-tree-azure-disk-e2e-vmss --root=/go/src --service-account=/etc/service-account/service-account.json --repo=k8s.io/kubernetes --repo=sigs.k8s.io/azuredisk-csi-driver=master:a6a3e9b4c8eb15f983b9cfc34935ebd72f18b698,188:463a1299807122c7d9e078a5bb09bb13492c20cf --upload=gs://kubernetes-upstream/pr-logs --timeout=460 --scenario=kubernetes_e2e -- --gce-ssh= --test=true --up=true --down=true --deployment=acsengine --build=host-go --provider=skeleton --ginkgo-parallel=1 --acsengine-agentpoolcount=2 --acsengine-admin-username=azureuser '--acsengine-creds=$AZURE_CREDENTIALS' --acsengine-orchestratorRelease=1.16 --acsengine-mastervmsize=Standard_DS2_v2 --acsengine-agentvmsize=Standard_D4s_v3 --acsengine-hyperkube=True --acsengine-location=westus2 '--acsengine-public-key=$AZURE_SSH_PUBLIC_KEY_FILE' --acsengine-template-url=https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/test/e2e/manifest/in-tree-vmss.json --acsengine-download-url=https://chuwontest.blob.core.windows.net/aks-engine/aks-engine-8cf7c2a74-linux-amd64.tar.gz --test-azure-disk-csi-driver=True --timeout=420m
I1113 23:06:01.566] Bootstrap chewong-pull-in-tree-azure-disk-e2e-vmss...
I1113 23:06:01.570] Builder: 16d33591-066a-11ea-918b-16773c0c7aed
I1113 23:06:01.570] Image: gcr.io/k8s-testimages/kubekins-e2e:v20191112-9f04410-master
I1113 23:06:01.570] Gubernator results at https://gubernator.k8s.io/build/kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601
I1113 23:06:01.571] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1113 23:06:02.094] Activated service account credentials for: [kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com]
I1113 23:06:02.126] process 189 exited with code 0 after 0.0m
I1113 23:06:02.127] Call: gcloud config get-value account
I1113 23:06:02.378] process 201 exited with code 0 after 0.0m
I1113 23:06:02.378] Will upload results to gs://kubernetes-upstream/pr-logs using kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com
I1113 23:06:02.379] Root: /go/src
I1113 23:06:02.379] cd to /go/src
I1113 23:06:02.379] Checkout: /go/src/k8s.io/kubernetes master to /go/src/k8s.io/kubernetes
I1113 23:06:02.379] Call: git init k8s.io/kubernetes
I1113 23:06:02.383] Initialized empty Git repository in /go/src/k8s.io/kubernetes/.git/
I1113 23:06:02.384] process 213 exited with code 0 after 0.0m
I1113 23:06:02.384] Call: git config --local user.name 'K8S Bootstrap'
I1113 23:06:02.386] process 214 exited with code 0 after 0.0m
I1113 23:06:02.387] Call: git config --local user.email k8s_bootstrap@localhost
I1113 23:06:02.389] process 215 exited with code 0 after 0.0m
I1113 23:06:02.389] Call: git fetch --quiet --tags https://github.com/kubernetes/kubernetes master
I1113 23:07:10.585] process 216 exited with code 0 after 1.1m
I1113 23:07:10.586] Call: git checkout -B test FETCH_HEAD
W1113 23:07:12.927] Switched to a new branch 'test'
I1113 23:07:12.936] process 227 exited with code 0 after 0.0m
I1113 23:07:12.936] Call: git show -s --format=format:%ct HEAD
I1113 23:07:12.948] process 228 exited with code 0 after 0.0m
I1113 23:07:12.948] Checkout: /go/src/sigs.k8s.io/azuredisk-csi-driver master:a6a3e9b4c8eb15f983b9cfc34935ebd72f18b698,188:463a1299807122c7d9e078a5bb09bb13492c20cf to /go/src/sigs.k8s.io/azuredisk-csi-driver
I1113 23:07:12.948] Call: git init sigs.k8s.io/azuredisk-csi-driver
I1113 23:07:12.952] Initialized empty Git repository in /go/src/sigs.k8s.io/azuredisk-csi-driver/.git/
I1113 23:07:12.952] process 229 exited with code 0 after 0.0m
I1113 23:07:12.952] Call: git config --local user.name 'K8S Bootstrap'
I1113 23:07:12.954] process 230 exited with code 0 after 0.0m
I1113 23:07:12.955] Call: git config --local user.email k8s_bootstrap@localhost
I1113 23:07:12.957] process 231 exited with code 0 after 0.0m
I1113 23:07:12.957] Call: git fetch --quiet --tags https://github.com/kubernetes-sigs/azuredisk-csi-driver master +refs/pull/188/head:refs/pr/188
I1113 23:07:19.332] process 232 exited with code 0 after 0.1m
I1113 23:07:19.333] Call: git checkout -B test a6a3e9b4c8eb15f983b9cfc34935ebd72f18b698
W1113 23:07:20.500] Switched to a new branch 'test'
I1113 23:07:20.503] process 243 exited with code 0 after 0.0m
I1113 23:07:20.503] Call: git show -s --format=format:%ct HEAD
I1113 23:07:20.506] process 244 exited with code 0 after 0.0m
I1113 23:07:20.507] Call: git merge --no-ff -m 'Merge +refs/pull/188/head:refs/pr/188' 463a1299807122c7d9e078a5bb09bb13492c20cf
I1113 23:07:20.787] Merge made by the 'recursive' strategy.
I1113 23:07:20.790] ...{safe_mounter_linux.go => safe_mounter_unix.go} | 2 +-
I1113 23:07:20.790] test/e2e/README.md | 9 +-
I1113 23:07:20.790] test/e2e/driver/azuredisk_csi_driver.go | 23 +++-
I1113 23:07:20.790] test/e2e/suite_test.go | 119 +++++++++++----------
I1113 23:07:20.790] 4 files changed, 89 insertions(+), 64 deletions(-)
I1113 23:07:20.791] rename pkg/mounter/{safe_mounter_linux.go => safe_mounter_unix.go} (97%)
I1113 23:07:20.791] process 245 exited with code 0 after 0.0m
I1113 23:07:20.791] Configure environment...
I1113 23:07:20.791] Call: git show -s --format=format:%ct HEAD
I1113 23:07:20.803] process 261 exited with code 0 after 0.0m
I1113 23:07:20.803] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1113 23:07:21.485] Activated service account credentials for: [kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com]
I1113 23:07:21.608] process 262 exited with code 0 after 0.0m
I1113 23:07:21.608] Call: gcloud config get-value account
I1113 23:07:21.828] process 274 exited with code 0 after 0.0m
I1113 23:07:21.828] Will upload results to gs://kubernetes-upstream/pr-logs using kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com
I1113 23:07:21.828] Call: bash -c '
set -o errexit
set -o nounset
export KUBE_ROOT=.
source hack/lib/version.sh
kube::version::get_version_vars
echo $KUBE_GIT_VERSION
'
I1113 23:07:22.741] process 286 exited with code 0 after 0.0m
I1113 23:07:22.742] Start 1194753232421785601 at v1.18.0-alpha.0.840+8af6906d1f300f...
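For anyone trying to reproduce this merge state outside of CI, a minimal sketch using the exact refs bootstrap logged above (the `test` branch name is just bootstrap's own convention):

  # Fetch the base branch and the PR head ref, as bootstrap does
  git init azuredisk-csi-driver && cd azuredisk-csi-driver
  git fetch --tags https://github.com/kubernetes-sigs/azuredisk-csi-driver \
      master +refs/pull/188/head:refs/pr/188
  # Check out the base SHA, then merge the PR head on top with --no-ff
  git checkout -B test a6a3e9b4c8eb15f983b9cfc34935ebd72f18b698
  git merge --no-ff -m 'Merge +refs/pull/188/head:refs/pr/188' \
      463a1299807122c7d9e078a5bb09bb13492c20cf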
I1113 23:07:22.745] Call: gsutil -q -h Content-Type:application/json cp /tmp/gsutil_fOCoDA gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601/started.json
I1113 23:07:24.042] process 319 exited with code 0 after 0.0m
I1113 23:07:24.043] Call: gsutil -q -h Content-Type:text/plain -h 'x-goog-meta-link: gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601' cp /tmp/gsutil_Cb7E0e gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601.txt
I1113 23:07:25.256] process 499 exited with code 0 after 0.0m
I1113 23:07:25.257] Call: /workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py --gce-ssh= --test=true --up=true --down=true --deployment=acsengine --build=host-go --provider=skeleton --ginkgo-parallel=1 --acsengine-agentpoolcount=2 --acsengine-admin-username=azureuser --acsengine-creds=/etc/azure-cred/credentials --acsengine-orchestratorRelease=1.16 --acsengine-mastervmsize=Standard_DS2_v2 --acsengine-agentvmsize=Standard_D4s_v3 --acsengine-hyperkube=True --acsengine-location=westus2 --acsengine-public-key=/etc/azure-ssh/azure-ssh-pub --acsengine-template-url=https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/test/e2e/manifest/in-tree-vmss.json --acsengine-download-url=https://chuwontest.blob.core.windows.net/aks-engine/aks-engine-8cf7c2a74-linux-amd64.tar.gz --test-azure-disk-csi-driver=True --timeout=420m
W1113 23:07:25.293] starts with local mode
W1113 23:07:25.293] Environment:
W1113 23:07:25.293] ARTIFACTS=/workspace/_artifacts
W1113 23:07:25.293] AWS_SSH_PRIVATE_KEY_FILE=/root/.ssh/kube_aws_rsa
W1113 23:07:25.294] AWS_SSH_PUBLIC_KEY_FILE=/root/.ssh/kube_aws_rsa.pub
W1113 23:07:25.294] AZURE_CREDENTIALS=/etc/azure-cred/credentials
W1113 23:07:25.294] AZURE_SSH_PUBLIC_KEY_FILE=/etc/azure-ssh/azure-ssh-pub
W1113 23:07:25.294] AZURE_STORAGE_DRIVER=kubernetes.io/azure-disk
W1113 23:07:25.294] AZ_STORAGE_CONTAINER_NAME=mystoragecontainer
W1113 23:07:25.294] BAZEL_REMOTE_CACHE_ENABLED=false
W1113 23:07:25.294] BAZEL_VERSION=0.23.2
W1113 23:07:25.295] BOOTSTRAP_MIGRATION=yes
W1113 23:07:25.295] BUILD_ID=1194753232421785601
W1113 23:07:25.295] BUILD_NUMBER=1194753232421785601
W1113 23:07:25.295] CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true
W1113 23:07:25.295] CLOUDSDK_CONFIG=/go/src/k8s.io/kubernetes/.config/gcloud
W1113 23:07:25.295] CLOUDSDK_CORE_DISABLE_PROMPTS=1
W1113 23:07:25.295] CLOUDSDK_EXPERIMENTAL_FAST_COMPONENT_UPDATE=false
W1113 23:07:25.295] CLOUD_CONFIG=random
W1113 23:07:25.296] DECK_PORT=tcp://10.0.160.83:80
W1113 23:07:25.296] DECK_PORT_80_TCP=tcp://10.0.160.83:80
W1113 23:07:25.296] DECK_PORT_80_TCP_ADDR=10.0.160.83
W1113 23:07:25.296] DECK_PORT_80_TCP_PORT=80
W1113 23:07:25.296] DECK_PORT_80_TCP_PROTO=tcp
W1113 23:07:25.296] DECK_SERVICE_HOST=10.0.160.83
W1113 23:07:25.296] DECK_SERVICE_PORT=80
W1113 23:07:25.296] DOCKER_IN_DOCKER_ENABLED=true
W1113 23:07:25.297] DOCKER_IN_DOCKER_IPV6_ENABLED=false
W1113 23:07:25.297] GCS_ARTIFACTS_DIR=gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601/artifacts
W1113 23:07:25.297] GOOGLE_APPLICATION_CREDENTIALS=/etc/service-account/service-account.json
W1113 23:07:25.297] GOPATH=/go
W1113 23:07:25.297] GO_TARBALL=go1.12.12.linux-amd64.tar.gz
W1113 23:07:25.297] HOME=/workspace
W1113 23:07:25.297] HOOK_PORT=tcp://10.0.144.133:8888
W1113 23:07:25.298] HOOK_PORT_8888_TCP=tcp://10.0.144.133:8888
W1113 23:07:25.298] HOOK_PORT_8888_TCP_ADDR=10.0.144.133
W1113 23:07:25.298] HOOK_PORT_8888_TCP_PORT=8888
W1113 23:07:25.298] HOOK_PORT_8888_TCP_PROTO=tcp
W1113 23:07:25.298] HOOK_SERVICE_HOST=10.0.144.133
W1113 23:07:25.298] HOOK_SERVICE_PORT=8888
W1113 23:07:25.298] HOSTNAME=16d33591-066a-11ea-918b-16773c0c7aed
W1113 23:07:25.299] IMAGE=gcr.io/k8s-testimages/kubekins-e2e:v20191112-9f04410-master
W1113 23:07:25.299] INSTANCE_PREFIX=bootstrap-e2e
W1113 23:07:25.299] JENKINS_GCE_SSH_PRIVATE_KEY_FILE=/root/.ssh/google_compute_engine
W1113 23:07:25.299] JENKINS_GCE_SSH_PUBLIC_KEY_FILE=/root/.ssh/google_compute_engine.pub
W1113 23:07:25.299] JOB_NAME=chewong-pull-in-tree-azure-disk-e2e-vmss
W1113 23:07:25.300] JOB_SPEC={"type":"presubmit","job":"chewong-pull-in-tree-azure-disk-e2e-vmss","buildid":"1194753232421785601","prowjobid":"16d33591-066a-11ea-918b-16773c0c7aed","refs":{"org":"Azure","repo":"kubernetes","repo_link":"https://github.com/Azure/kubernetes","base_ref":"master","base_sha":"8af6906d1f300ff739ac1a846d4108d134412103","base_link":"https://github.com/Azure/kubernetes/commit/8af6906d1f300ff739ac1a846d4108d134412103","pulls":[{"number":113,"author":"chewong","sha":"879465972041635fd7bc8269da49153b67e63822","link":"https://github.com/Azure/kubernetes/pull/113","commit_link":"https://github.com/Azure/kubernetes/pull/113/commits/879465972041635fd7bc8269da49153b67e63822","author_link":"https://github.com/chewong"}]}}
W1113 23:07:25.300] JOB_TYPE=presubmit
W1113 23:07:25.300] KUBERNETES_PORT=tcp://10.0.0.1:443
W1113 23:07:25.300] KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
W1113 23:07:25.300] KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
W1113 23:07:25.300] KUBERNETES_PORT_443_TCP_PORT=443
W1113 23:07:25.300] KUBERNETES_PORT_443_TCP_PROTO=tcp
W1113 23:07:25.301] KUBERNETES_SERVICE_HOST=10.0.0.1
W1113 23:07:25.301] KUBERNETES_SERVICE_PORT=443
W1113 23:07:25.301] KUBERNETES_SERVICE_PORT_HTTPS=443
W1113 23:07:25.301] KUBETEST_IN_DOCKER=true
W1113 23:07:25.301] KUBETEST_MANUAL_DUMP=y
W1113 23:07:25.301] KUBE_AWS_INSTANCE_PREFIX=bootstrap-e2e
W1113 23:07:25.301] KUBE_GCE_INSTANCE_PREFIX=bootstrap-e2e
W1113 23:07:25.302] KUBE_SSH_KEY_PATH=/etc/ssh-key-secret/ssh-private
W1113 23:07:25.302] KUBE_SSH_PUBLIC_KEY_PATH=/etc/ssh-key-secret/ssh-public
W1113 23:07:25.302] KUBE_SSH_USER=azureuser
W1113 23:07:25.302] KUBE_VERBOSE=0
W1113 23:07:25.302] NODE_NAME=16d33591-066a-11ea-918b-16773c0c7aed
W1113 23:07:25.302] PATH=/go/bin:/go/bin:/usr/local/go/bin:/google-cloud-sdk/bin:/workspace:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
W1113 23:07:25.302] PROW_JOB_ID=16d33591-066a-11ea-918b-16773c0c7aed
W1113 23:07:25.302] PULL_BASE_REF=master
W1113 23:07:25.303] PULL_BASE_SHA=8af6906d1f300ff739ac1a846d4108d134412103
W1113 23:07:25.303] PULL_NUMBER=113
W1113 23:07:25.303] PULL_PULL_SHA=879465972041635fd7bc8269da49153b67e63822
W1113 23:07:25.303] PULL_REFS=master:8af6906d1f300ff739ac1a846d4108d134412103,113:879465972041635fd7bc8269da49153b67e63822
W1113 23:07:25.303] PWD=/workspace
W1113 23:07:25.303] REGISTRY=k8sprowinternal.azurecr.io
W1113 23:07:25.304] REPO_NAME=kubernetes
W1113 23:07:25.304] REPO_OWNER=Azure
W1113 23:07:25.304] SHLVL=2
W1113 23:07:25.304] SOURCE_DATE_EPOCH=1573685531
W1113 23:07:25.304] TERM=xterm
W1113 23:07:25.304] TIDE_PORT=tcp://10.0.180.53:80
W1113 23:07:25.304] TIDE_PORT_80_TCP=tcp://10.0.180.53:80
W1113 23:07:25.304] TIDE_PORT_80_TCP_ADDR=10.0.180.53
W1113 23:07:25.304] TIDE_PORT_80_TCP_PORT=80
W1113 23:07:25.304] TIDE_PORT_80_TCP_PROTO=tcp
W1113 23:07:25.305] TIDE_SERVICE_HOST=10.0.180.53
W1113 23:07:25.305] TIDE_SERVICE_PORT=80
W1113 23:07:25.305] WORKSPACE=/workspace
W1113 23:07:25.305] _=./test-infra/jenkins/bootstrap.py
W1113 23:07:25.306] Run: ('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=host-go', '--up', '--down', '--test', '--deployment=acsengine', '--provider=skeleton', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--ginkgo-parallel=1', '--acsengine-agentpoolcount=2', '--acsengine-admin-username=azureuser', '--acsengine-creds=/etc/azure-cred/credentials', '--acsengine-orchestratorRelease=1.16', '--acsengine-mastervmsize=Standard_DS2_v2', '--acsengine-agentvmsize=Standard_D4s_v3', '--acsengine-hyperkube=True', '--acsengine-location=westus2', '--acsengine-public-key=/etc/azure-ssh/azure-ssh-pub', '--acsengine-template-url=https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/test/e2e/manifest/in-tree-vmss.json', '--acsengine-download-url=https://chuwontest.blob.core.windows.net/aks-engine/aks-engine-8cf7c2a74-linux-amd64.tar.gz', '--test-azure-disk-csi-driver=True', '--timeout=420m')
W1113 23:07:25.435] 2019/11/13 23:07:25 main.go:332: Limiting testing to 7h0m0s
W1113 23:07:25.436] 2019/11/13 23:07:25 azure.go:232: Reading credentials file /etc/azure-cred/credentials
W1113 23:07:25.438] 2019/11/13 23:07:25 process.go:153: Running: make -C /go/src/k8s.io/kubernetes all
I1113 23:07:25.538] make: Entering directory '/go/src/k8s.io/kubernetes'
I1113 23:07:25.538] make[1]: Entering directory '/go/src/k8s.io/kubernetes'
I1113 23:07:34.412] +++ [1113 23:07:34] Building go targets for linux/amd64:
I1113 23:07:34.413] ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
I1113 23:07:43.149] +++ [1113 23:07:43] Building go targets for linux/amd64:
I1113 23:07:43.149] ./vendor/k8s.io/code-generator/cmd/defaulter-gen
I1113 23:07:53.494] +++ [1113 23:07:53] Building go targets for linux/amd64:
I1113 23:07:53.494] ./vendor/k8s.io/code-generator/cmd/conversion-gen
I1113 23:08:05.598] +++ [1113 23:08:05] Building go targets for linux/amd64:
I1113 23:08:05.599] ./vendor/k8s.io/kube-openapi/cmd/openapi-gen
I1113 23:08:18.995] +++ [1113 23:08:18] Building go targets for linux/amd64:
I1113 23:08:18.996] ./vendor/github.com/go-bindata/go-bindata/go-bindata
I1113 23:08:20.254] make[1]: Leaving directory '/go/src/k8s.io/kubernetes'
I1113 23:08:20.557] +++ [1113 23:08:20] Building go targets for linux/amd64:
I1113 23:08:20.557] cmd/kube-proxy
I1113 23:08:20.557] cmd/kube-apiserver
I1113 23:08:20.557] cmd/kube-controller-manager
I1113 23:08:20.557] cmd/kubelet
I1113 23:08:20.558] cmd/kubeadm
I1113 23:08:20.558] cmd/kube-scheduler
I1113 23:08:20.558] vendor/k8s.io/apiextensions-apiserver
I1113 23:08:20.558] cluster/gce/gci/mounter
I1113 23:08:20.558] cmd/kubectl
I1113 23:08:20.558] cmd/gendocs
I1113 23:08:20.558] cmd/genkubedocs
I1113 23:08:20.558] cmd/genman
I1113 23:08:20.559] cmd/genyaml
I1113 23:08:20.559] cmd/genswaggertypedocs
I1113 23:08:20.559] cmd/linkcheck
I1113 23:08:20.559] vendor/github.com/onsi/ginkgo/ginkgo
I1113 23:08:20.559] test/e2e/e2e.test
I1113 23:08:20.559] cluster/images/conformance/go-runner
I1113 23:08:20.559] cmd/kubemark
I1113 23:08:20.559] vendor/github.com/onsi/ginkgo/ginkgo
I1113 23:08:20.560] test/e2e_node/e2e_node.test
I1113 23:16:06.746] make: Leaving directory '/go/src/k8s.io/kubernetes'
W1113 23:16:06.847] 2019/11/13 23:16:06 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes all' finished in 8m41.308884956s
W1113 23:16:06.847] 2019/11/13 23:16:06 util.go:265: Flushing memory.
W1113 23:16:42.570] 2019/11/13 23:16:42 azure.go:916: Deleting resource group: kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d.
W1113 23:16:43.022] 2019/11/13 23:16:43 azure.go:654: CWD /go/src/k8s.io/kubernetes
W1113 23:16:43.022] 2019/11/13 23:16:43 azure.go:673: Attempting Docker login with azure cred.
W1113 23:16:44.344] 2019/11/13 23:16:44 azure.go:682: Docker login success.
W1113 23:16:44.345] 2019/11/13 23:16:44 azure.go:727: Building hyperkube.
W1113 23:16:44.345] 2019/11/13 23:16:44 util.go:56: Warning: Couldn't find directory src/k8s.io/kubernetes/cmd/hyperkube under any of GOPATH /go, defaulting to /go/src/k8s.io/kubernetes/cmd/hyperkube
W1113 23:16:44.345] 2019/11/13 23:16:44 azure.go:745: Pushing hyperkube.
W1113 23:16:44.346] 2019/11/13 23:16:44 process.go:153: Running: make -C /go/src/k8s.io/kubernetes/cluster/images/hyperkube push
I1113 23:16:44.446] make: Entering directory '/go/src/k8s.io/kubernetes/cluster/images/hyperkube'
I1113 23:16:44.446] cp -r ./* /tmp/hyperkube6dGd2k
I1113 23:16:44.446] tar -cvzf /tmp/hyperkube6dGd2k/binaries.tgz -C /go/src/k8s.io/kubernetes/cluster/images/hyperkube/../../../_output/local/bin/linux/amd64 kube-apiserver kube-controller-manager \
I1113 23:16:44.447] kube-proxy kube-scheduler kubectl kubelet
I1113 23:16:44.447] kube-apiserver
I1113 23:16:49.468] kube-controller-manager
I1113 23:16:54.349] kube-proxy
I1113 23:16:56.215] kube-scheduler
I1113 23:16:58.333] kubectl
I1113 23:17:00.514] kubelet
I1113 23:17:05.478] chmod a+rx /tmp/hyperkube6dGd2k/hyperkube
I1113 23:17:05.479] cd /tmp/hyperkube6dGd2k && sed -i.back "s|BASEIMAGE|k8s.gcr.io/debian-hyperkube-base-amd64:0.12.1|g" Dockerfile
I1113 23:17:05.482] docker build --pull -t k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194753232421785601-a7885bd7 /tmp/hyperkube6dGd2k
I1113 23:17:06.344] Sending build context to Docker daemon 122MB
I1113 23:17:06.366] Step 1/4 : FROM k8s.gcr.io/debian-hyperkube-base-amd64:0.12.1
I1113 23:17:06.678] 0.12.1: Pulling from debian-hyperkube-base-amd64
I1113 23:17:06.679] 346aee5ea5bc: Pulling fs layer
I1113 23:17:06.679] 9c47fde751af: Pulling fs layer
I1113 23:17:06.679] be2693a52daf: Pulling fs layer
I1113 23:17:06.679] 6b69eb11d043: Pulling fs layer
I1113 23:17:06.679] 0abeb1500767: Pulling fs layer
I1113 23:17:06.680] 4062d80041b7: Pulling fs layer
I1113 23:17:06.680] 23b6daf06fc2: Pulling fs layer
I1113 23:17:06.680] 4062d80041b7: Waiting
I1113 23:17:06.680] 6b69eb11d043: Waiting
I1113 23:17:06.680] 0abeb1500767: Waiting
I1113 23:17:06.680] 23b6daf06fc2: Waiting
I1113 23:17:06.900] 9c47fde751af: Download complete
I1113 23:17:06.994] be2693a52daf: Verifying Checksum
I1113 23:17:06.994] be2693a52daf: Download complete
I1113 23:17:07.037] 6b69eb11d043: Verifying Checksum
I1113 23:17:07.037] 6b69eb11d043: Download complete
I1113 23:17:07.058] 346aee5ea5bc: Verifying Checksum
I1113 23:17:07.058] 346aee5ea5bc: Download complete
I1113 23:17:07.192] 0abeb1500767: Verifying Checksum
I1113 23:17:07.193] 0abeb1500767: Download complete
I1113 23:17:07.554] 23b6daf06fc2: Verifying Checksum
I1113 23:17:07.554] 23b6daf06fc2: Download complete
I1113 23:17:08.367] 4062d80041b7: Verifying Checksum
I1113 23:17:08.367] 4062d80041b7: Download complete
I1113 23:17:08.481] 346aee5ea5bc: Pull complete
I1113 23:17:08.672] 9c47fde751af: Pull complete
I1113 23:17:08.946] be2693a52daf: Pull complete
I1113 23:17:09.131] 6b69eb11d043: Pull complete
I1113 23:17:09.341] 0abeb1500767: Pull complete
I1113 23:17:16.043] 4062d80041b7: Pull complete
I1113 23:17:19.544] 23b6daf06fc2: Pull complete
I1113 23:17:19.591] Digest: sha256:5d4ea2fb5fbe9a9a9da74f67cf2faefc881968bc39f2ac5d62d9167e575812a1
I1113 23:17:19.624] Status: Downloaded newer image for k8s.gcr.io/debian-hyperkube-base-amd64:0.12.1
I1113 23:17:19.625] ---> a46476511725
I1113 23:17:19.625] Step 2/4 : ADD binaries.tgz /usr/local/bin
I1113 23:18:00.959] ---> 43a144a3332b
I1113 23:18:00.959] Step 3/4 : COPY hyperkube /hyperkube
I1113 23:18:02.090] ---> 87459c208634
I1113 23:18:02.091] Step 4/4 : ENTRYPOINT ["/hyperkube"]
I1113 23:18:02.286] ---> Running in 689a327e783f
I1113 23:18:03.109] Removing intermediate container 689a327e783f
I1113 23:18:03.109] ---> 0115ddb35bcb
I1113 23:18:03.126] Successfully built 0115ddb35bcb
I1113 23:18:03.264] Successfully tagged k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194753232421785601-a7885bd7
I1113 23:18:03.267] rm -rf "/tmp/hyperkube6dGd2k"
I1113 23:18:03.285] docker push k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194753232421785601-a7885bd7
I1113 23:18:03.350] The push refers to repository [k8sprowinternal.azurecr.io/hyperkube-amd64]
I1113 23:18:03.362] 17a2ad1bc805: Preparing
I1113 23:18:03.363] f312e4118340: Preparing
I1113 23:18:03.363] 39b3ac6d96e9: Preparing
I1113 23:18:03.363] 7bbae4dddb88: Preparing
I1113 23:18:03.363] a00defcfe869: Preparing
I1113 23:18:03.363] 2ab0ae805c74: Preparing
I1113 23:18:03.363] 43a8fe7d2382: Preparing
I1113 23:18:03.363] 3f6a6f542637: Preparing
I1113 23:18:03.364] 5ba3be777c2d: Preparing
I1113 23:18:03.364] 2ab0ae805c74: Waiting
I1113 23:18:03.364] 43a8fe7d2382: Waiting
I1113 23:18:03.364] 5ba3be777c2d: Waiting
I1113 23:18:03.564] 39b3ac6d96e9: Layer already exists
I1113 23:18:03.612] 7bbae4dddb88: Layer already exists
I1113 23:18:03.660] a00defcfe869: Layer already exists
I1113 23:18:03.722] 2ab0ae805c74: Layer already exists
I1113 23:18:03.779] 43a8fe7d2382: Layer already exists
I1113 23:18:03.823] 3f6a6f542637: Layer already exists
I1113 23:18:03.860] 17a2ad1bc805: Pushed
I1113 23:18:03.879] 5ba3be777c2d: Layer already exists
I1113 23:18:28.919] f312e4118340: Pushed
I1113 23:18:30.028] azure-e2e-1194753232421785601-a7885bd7: digest: sha256:2cf617ec5812780ff2d2908a03888bbbbb3ff3816c91c32baab228dc4b039cb7 size: 2206
I1113 23:18:30.045] docker rmi k8sprowinternal.azurecr.io/hyperkube:azure-e2e-1194753232421785601-a7885bd7 2>/dev/null || true
I1113 23:18:30.101] docker tag k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194753232421785601-a7885bd7 k8sprowinternal.azurecr.io/hyperkube:azure-e2e-1194753232421785601-a7885bd7
I1113 23:18:30.184] docker push k8sprowinternal.azurecr.io/hyperkube:azure-e2e-1194753232421785601-a7885bd7
I1113 23:18:30.250] The push refers to repository [k8sprowinternal.azurecr.io/hyperkube]
I1113 23:18:30.263] 17a2ad1bc805: Preparing
I1113 23:18:30.263] f312e4118340: Preparing
I1113 23:18:30.264] 39b3ac6d96e9: Preparing
I1113 23:18:30.264] 7bbae4dddb88: Preparing
I1113 23:18:30.264] a00defcfe869: Preparing
I1113 23:18:30.264] 2ab0ae805c74: Preparing
I1113 23:18:30.264] 43a8fe7d2382: Preparing
I1113 23:18:30.264] 3f6a6f542637: Preparing
I1113 23:18:30.264] 5ba3be777c2d: Preparing
I1113 23:18:30.264] 2ab0ae805c74: Waiting
I1113 23:18:30.264] 43a8fe7d2382: Waiting
I1113 23:18:30.265] 3f6a6f542637: Waiting
I1113 23:18:30.265] 5ba3be777c2d: Waiting
I1113 23:18:30.320] a00defcfe869: Layer already exists
I1113 23:18:30.342] 7bbae4dddb88: Layer already exists
I1113 23:18:30.365] 39b3ac6d96e9: Layer already exists
I1113 23:18:30.388] 2ab0ae805c74: Layer already exists
I1113 23:18:30.432] 17a2ad1bc805: Layer already exists
I1113 23:18:30.444] 43a8fe7d2382: Layer already exists
I1113 23:18:30.466] 3f6a6f542637: Layer already exists
I1113 23:18:30.476] f312e4118340: Mounted from hyperkube-amd64
I1113 23:18:30.477] 5ba3be777c2d: Layer already exists
I1113 23:18:33.692] azure-e2e-1194753232421785601-a7885bd7: digest: sha256:2cf617ec5812780ff2d2908a03888bbbbb3ff3816c91c32baab228dc4b039cb7 size: 2206
I1113 23:18:33.781] make: Leaving directory '/go/src/k8s.io/kubernetes/cluster/images/hyperkube'
W1113 23:18:33.882] 2019/11/13 23:18:33 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes/cluster/images/hyperkube push' finished in 1m49.436600747s
W1113 23:18:33.882] 2019/11/13 23:18:33 azure.go:686: Docker logout.
W1113 23:18:33.887] 2019/11/13 23:18:33 azure.go:754: Custom hyperkube URL: k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194753232421785601-a7885bd7 .
W1113 23:18:33.888] 2019/11/13 23:18:33 azure.go:797: downloading /workspace/aks843422017/kubernetes.json from https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/test/e2e/manifest/in-tree-vmss.json.
W1113 23:18:33.888] 2019/11/13 23:18:33 util.go:42: curl https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/test/e2e/manifest/in-tree-vmss.json
W1113 23:18:34.061] 2019/11/13 23:18:34 azure.go:529: downloading /workspace/aks-engine.tar.gz from https://chuwontest.blob.core.windows.net/aks-engine/aks-engine-8cf7c2a74-linux-amd64.tar.gz.
W1113 23:18:34.062] 2019/11/13 23:18:34 util.go:42: curl https://chuwontest.blob.core.windows.net/aks-engine/aks-engine-8cf7c2a74-linux-amd64.tar.gz
W1113 23:18:34.732] 2019/11/13 23:18:34 azure.go:556: Extracting tar file /workspace/aks-engine.tar.gz into directory /go/src/k8s.io/kubernetes .
W1113 23:18:34.732] 2019/11/13 23:18:34 process.go:153: Running: tar -xzf /workspace/aks-engine.tar.gz --strip 1
W1113 23:18:34.891] 2019/11/13 23:18:34 process.go:155: Step 'tar -xzf /workspace/aks-engine.tar.gz --strip 1' finished in 158.97153ms
W1113 23:18:34.892] 2019/11/13 23:18:34 process.go:153: Running: /go/src/k8s.io/kubernetes/aks-engine generate /workspace/aks843422017/kubernetes.json --output-directory /workspace/aks843422017
I1113 23:18:35.163] INFO[0000] Generating assets into /workspace/aks843422017...
W1113 23:18:42.754] 2019/11/13 23:18:42 process.go:155: Step '/go/src/k8s.io/kubernetes/aks-engine generate /workspace/aks843422017/kubernetes.json --output-directory /workspace/aks843422017' finished in 7.862819848s
W1113 23:18:42.756] 2019/11/13 23:18:42 azure.go:627: Setting kubeconfig env variable: kubeconfig path: /workspace/aks843422017/kubeconfig/kubeconfig.westus2.json.
W1113 23:18:42.757] 2019/11/13 23:18:42 azure.go:629: Creating resource group: kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d.
W1113 23:18:42.757] 2019/11/13 23:18:42 azure.go:631: Creating Azure resource group: kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d for cluster deployment.
W1113 23:18:42.930] 2019/11/13 23:18:42 azure.go:636: Validating deployment ARM templates.
W1113 23:18:43.558] 2019/11/13 23:18:43 azure.go:642: Deploying cluster kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d in resource group kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d.
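To replay the provisioning step by hand, a rough equivalent of what kubetest just did (URLs and paths copied from the log; the `out` directory name is an arbitrary choice here) would be:

  # Fetch the pinned aks-engine build and the cluster template
  curl -fsSL -o aks-engine.tar.gz https://chuwontest.blob.core.windows.net/aks-engine/aks-engine-8cf7c2a74-linux-amd64.tar.gz
  tar -xzf aks-engine.tar.gz --strip 1
  curl -fsSL -o kubernetes.json https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/test/e2e/manifest/in-tree-vmss.json
  # Generate the ARM templates and kubeconfig; kubetest then deploys them into a fresh resource group
  ./aks-engine generate kubernetes.json --output-directory out
  export KUBECONFIG=$PWD/out/kubeconfig/kubeconfig.westus2.json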
W1113 23:26:44.424] 2019/11/13 23:26:44 process.go:153: Running: kubectl --match-server-version=false version
W1113 23:26:44.690] 2019/11/13 23:26:44 process.go:155: Step 'kubectl --match-server-version=false version' finished in 261.191623ms
W1113 23:26:44.690] 2019/11/13 23:26:44 process.go:153: Running: kubectl --match-server-version=false get nodes -oyaml
W1113 23:26:45.962] 2019/11/13 23:26:45 process.go:155: Step 'kubectl --match-server-version=false get nodes -oyaml' finished in 1.273957867s
W1113 23:26:45.962] 2019/11/13 23:26:45 azure.go:980: KUBE_TEST_REPO_LIST_DOWNLOAD_LOCATION not set. Using default test image repos.
W1113 23:26:45.962] 2019/11/13 23:26:45 azure.go:980: KUBE_TEST_REPO_LIST_DOWNLOAD_LOCATION not set. Using default test image repos.
W1113 23:26:45.962] 2019/11/13 23:26:45 process.go:153: Running: kubectl get nodes --no-headers
W1113 23:26:46.384] 2019/11/13 23:26:46 process.go:155: Step 'kubectl get nodes --no-headers' finished in 423.310379ms
W1113 23:26:46.384] 2019/11/13 23:26:46 e2e.go:474: Cluster nodes:
W1113 23:26:46.386] k8s-agentpool1-44273024-vmss000000 Ready 46s v1.18.0-alpha.0.840+8af6906d1f300f
W1113 23:26:46.386] k8s-agentpool1-44273024-vmss000001 Ready 46s v1.18.0-alpha.0.840+8af6906d1f300f
W1113 23:26:46.386] k8s-master-44273024-0 Ready 46s v1.18.0-alpha.0.840+8af6906d1f300f
W1113 23:26:46.390] 2019/11/13 23:26:46 process.go:153: Running: kubectl --match-server-version=false version
W1113 23:26:46.577] 2019/11/13 23:26:46 process.go:155: Step 'kubectl --match-server-version=false version' finished in 192.902594ms
W1113 23:26:46.577] 2019/11/13 23:26:46 process.go:153: Running: make e2e-test
I1113 23:26:46.678] go test -v -timeout=0 ./test/e2e "-ginkgo.noColor"
I1113 23:28:12.087] === RUN TestE2E
I1113 23:28:12.087] Running Suite: AzureDisk CSI Driver End-to-End Tests
I1113 23:28:12.087] ====================================================
I1113 23:28:12.087] Random Seed: 1573687692
I1113 23:28:12.087] Will run 9 of 9 specs
I1113 23:28:12.087]
I1113 23:28:12.636] 2019/11/13 23:28:12 Attempting docker login with Azure service principal
I1113 23:28:13.658] 2019/11/13 23:28:13 docker login is successful
I1113 23:28:13.658] 2019/11/13 23:28:13 Installing Azure Disk CSI Driver...
I1113 23:28:13.659] make[1]: Entering directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 23:28:13.662] # Use v2.11.0 helm to match tiller's version in clusters made by aks-engine
I1113 23:28:13.663] curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | DESIRED_VERSION=v2.11.0 bash
I1113 23:28:13.739] % Total % Received % Xferd Average Speed Time Time Time Current
I1113 23:28:13.739] Dload Upload Total Spent Left Speed
I1113 23:28:13.791] 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 7164 100 7164 0 0 134k 0 --:--:-- --:--:-- --:--:-- 137k
I1113 23:28:13.803] Downloading https://get.helm.sh/helm-v2.11.0-linux-amd64.tar.gz
I1113 23:28:15.224] Preparing to install helm and tiller into /usr/local/bin
I1113 23:28:15.251] helm installed into /usr/local/bin/helm
I1113 23:28:15.279] tiller installed into /usr/local/bin/tiller
I1113 23:28:15.281] Run 'helm init' to configure helm.
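The Makefile pins the helm client via the official get script's DESIRED_VERSION knob; a minimal stand-alone version of that step, if you need the same client locally, is:

  # Install a specific helm 2.x client (the get script honors DESIRED_VERSION)
  curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | DESIRED_VERSION=v2.11.0 bash
  helm version --client   # should report v2.11.0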
I1113 23:28:15.296] # Make sure tiller is ready
I1113 23:28:15.297] kubectl wait pod -l name=tiller --namespace kube-system --for condition=ready --timeout 5m
I1113 23:28:20.841] pod/tiller-deploy-598f7df6d6-rk6wh condition met
I1113 23:28:20.843] helm version
I1113 23:28:20.903] Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
I1113 23:28:21.034] Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
I1113 23:28:21.037] # Only build and push the image if it does not exist in the registry
I1113 23:28:21.037] docker pull k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35 || make azuredisk-container push
I1113 23:28:21.524] e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35: Pulling from azuredisk-csi
I1113 23:28:21.524] 5667fdb72017: Pulling fs layer
I1113 23:28:21.524] d83811f270d5: Pulling fs layer
I1113 23:28:21.524] ee671aafb583: Pulling fs layer
I1113 23:28:21.524] 7fc152dfb3a6: Pulling fs layer
I1113 23:28:21.524] 7940edb1bae7: Pulling fs layer
I1113 23:28:21.525] 2ac04aec52fc: Pulling fs layer
I1113 23:28:21.525] 3d25cb8b687d: Pulling fs layer
I1113 23:28:21.525] df9549f7ba37: Pulling fs layer
I1113 23:28:21.525] 099e251ab393: Pulling fs layer
I1113 23:28:21.525] 7940edb1bae7: Waiting
I1113 23:28:21.525] df9549f7ba37: Waiting
I1113 23:28:21.525] 099e251ab393: Waiting
I1113 23:28:21.525] 3d25cb8b687d: Waiting
I1113 23:28:21.526] 7fc152dfb3a6: Waiting
I1113 23:28:21.526] 2ac04aec52fc: Waiting
I1113 23:28:21.704] ee671aafb583: Verifying Checksum
I1113 23:28:21.704] ee671aafb583: Download complete
I1113 23:28:21.752] d83811f270d5: Download complete
I1113 23:28:21.831] 7fc152dfb3a6: Verifying Checksum
I1113 23:28:21.831] 7fc152dfb3a6: Download complete
I1113 23:28:21.851] 7940edb1bae7: Verifying Checksum
I1113 23:28:21.852] 7940edb1bae7: Download complete
I1113 23:28:21.986] 5667fdb72017: Verifying Checksum
I1113 23:28:21.986] 5667fdb72017: Download complete
I1113 23:28:22.254] df9549f7ba37: Download complete
I1113 23:28:22.356] 3d25cb8b687d: Verifying Checksum
I1113 23:28:22.356] 3d25cb8b687d: Download complete
I1113 23:28:22.557] 2ac04aec52fc: Verifying Checksum
I1113 23:28:22.558] 2ac04aec52fc: Download complete
I1113 23:28:22.619] 099e251ab393: Verifying Checksum
I1113 23:28:22.619] 099e251ab393: Download complete
I1113 23:28:23.982] 5667fdb72017: Pull complete
I1113 23:28:24.134] d83811f270d5: Pull complete
I1113 23:28:24.284] ee671aafb583: Pull complete
I1113 23:28:24.439] 7fc152dfb3a6: Pull complete
I1113 23:28:24.596] 7940edb1bae7: Pull complete
I1113 23:28:27.167] 2ac04aec52fc: Pull complete
I1113 23:28:29.090] 3d25cb8b687d: Pull complete
I1113 23:28:29.779] df9549f7ba37: Pull complete
I1113 23:28:30.692] 099e251ab393: Pull complete
I1113 23:28:30.717] Digest: sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632
I1113 23:28:30.735] Status: Downloaded newer image for k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35
I1113 23:28:30.736] k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35
I1113 23:28:30.748] helm install charts/latest/azuredisk-csi-driver -n azuredisk-csi-driver --namespace kube-system --wait \
I1113 23:28:30.748] --set image.pullPolicy=IfNotPresent \
I1113 23:28:30.748] --set image.repository=k8sprowinternal.azurecr.io/azuredisk-csi \
I1113 23:28:30.748] --set image.tag=e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35
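The `docker pull ... || make ...` line above is a cheap existence check: the pull exits non-zero when the tag is absent from the registry, so the build-and-push only runs on a cache miss. The same pattern, generalized (image name copied from the log):

  IMAGE=k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35
  # Build and push only when the tag is not already in the registry
  docker pull "$IMAGE" || make azuredisk-container push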
I1113 23:29:30.217] NAME: azuredisk-csi-driver
I1113 23:29:31.218] LAST DEPLOYED: Wed Nov 13 23:28:31 2019
I1113 23:29:31.218] NAMESPACE: kube-system
I1113 23:29:31.219] STATUS: DEPLOYED
I1113 23:29:31.219]
I1113 23:29:31.219] RESOURCES:
I1113 23:29:31.219] ==> v1/ClusterRole
I1113 23:29:31.219] NAME AGE
I1113 23:29:31.219] azuredisk-cluster-driver-registrar-role 56s
I1113 23:29:31.220] azuredisk-external-attacher-role 56s
I1113 23:29:31.220] azuredisk-external-provisioner-role 56s
I1113 23:29:31.220] azuredisk-external-snapshotter-role 56s
I1113 23:29:31.220]
I1113 23:29:31.220] ==> v1/ClusterRoleBinding
I1113 23:29:31.220] NAME AGE
I1113 23:29:31.220] azuredisk-csi-attacher-binding 56s
I1113 23:29:31.220] azuredisk-csi-driver-registrar-binding 56s
I1113 23:29:31.221] azuredisk-csi-provisioner-binding 56s
I1113 23:29:31.221] azuredisk-csi-snapshotter-binding 56s
I1113 23:29:31.221]
I1113 23:29:31.221] ==> v1/DaemonSet
I1113 23:29:31.221] NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
I1113 23:29:31.221] csi-azuredisk-node 2 2 2 2 2 beta.kubernetes.io/os=linux 56s
I1113 23:29:31.221]
I1113 23:29:31.221] ==> v1/Deployment
I1113 23:29:31.221] NAME READY UP-TO-DATE AVAILABLE AGE
I1113 23:29:31.222] csi-azuredisk-controller 1/1 1 1 55s
I1113 23:29:31.222]
I1113 23:29:31.222] ==> v1/Pod(related)
I1113 23:29:31.222] NAME READY STATUS RESTARTS AGE
I1113 23:29:31.222] csi-azuredisk-controller-98cdd8546-pfhjp 6/6 Running 0 55s
I1113 23:29:31.222] csi-azuredisk-node-58qqk 3/3 Running 0 56s
I1113 23:29:31.223] csi-azuredisk-node-rzmsd 3/3 Running 0 56s
I1113 23:29:31.223]
I1113 23:29:31.223] ==> v1/Service
I1113 23:29:31.223] NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
I1113 23:29:31.223] csi-azuredisk-controller ClusterIP 10.0.13.1 10252/TCP 56s
I1113 23:29:31.223]
I1113 23:29:31.223] ==> v1/ServiceAccount
I1113 23:29:31.224] NAME SECRETS AGE
I1113 23:29:31.224] csi-azuredisk-controller-sa 1 56s
I1113 23:29:31.224]
I1113 23:29:31.224] ==> v1beta1/CustomResourceDefinition
I1113 23:29:31.224] NAME AGE
I1113 23:29:31.224] csidrivers.csi.storage.k8s.io 56s
I1113 23:29:31.224] csinodeinfos.csi.storage.k8s.io 56s
I1113 23:29:31.224]
I1113 23:29:31.224]
I1113 23:29:31.224] NOTES:
I1113 23:29:31.224] The Azure Disk CSI Driver is getting deployed to your cluster.
I1113 23:29:31.225]
I1113 23:29:31.225] To check Azure Disk CSI Driver pods status, please run:
I1113 23:29:31.225]
I1113 23:29:31.225] kubectl --namespace=kube-system get pods --selector="release=azuredisk-csi-driver" --watch
I1113 23:29:31.225]
I1113 23:29:31.225] make[1]: Leaving directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 23:29:31.225] 2019/11/13 23:29:31 Azure Disk CSI Driver installed
I1113 23:31:45.833] • [SLOW TEST:134.612 seconds]
I1113 23:31:45.834] Dynamic Provisioning
I1113 23:31:45.834] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 23:31:45.834] should create a volume on demand
I1113 23:31:45.834] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:46
I1113 23:31:45.834] ------------------------------
I1113 23:33:46.332] • [SLOW TEST:120.497 seconds]
I1113 23:33:46.332] Dynamic Provisioning
I1113 23:33:46.332] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 23:33:46.332] should create a raw block volume on demand
I1113 23:33:46.332] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:68
I1113 23:33:46.333] ------------------------------
I1113 23:35:45.846] • [SLOW TEST:119.515 seconds]
I1113 23:35:45.846] Dynamic Provisioning
I1113 23:35:45.846] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 23:35:45.846] should create a volume on demand and mount it as readOnly in a pod
I1113 23:35:45.847] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:92
I1113 23:35:45.847] ------------------------------
I1113 23:40:33.411] • [SLOW TEST:287.565 seconds]
I1113 23:40:33.412] Dynamic Provisioning
I1113 23:40:33.412] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 23:40:33.412] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node
I1113 23:40:33.412] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:116
I1113 23:40:33.413] ------------------------------
I1113 23:44:02.120] • [SLOW TEST:208.709 seconds]
I1113 23:44:02.121] Dynamic Provisioning
I1113 23:44:02.121] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 23:44:02.121] should create a deployment object, write and read to it, delete the pod and write and read to it again
I1113 23:44:02.121] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:153
I1113 23:44:02.122] ------------------------------
I1113 23:44:30.437] • [SLOW TEST:28.316 seconds]
I1113 23:44:30.437] Dynamic Provisioning
I1113 23:44:30.437] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 23:44:30.437] should delete PV with reclaimPolicy "Delete"
I1113 23:44:30.438] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:178
I1113 23:44:30.438] ------------------------------
I1113 23:44:58.969] STEP: Creating a kubernetes client
I1113 23:44:58.969] Nov 13 23:44:30.437: INFO: >>> kubeConfig: /workspace/aks843422017/kubeconfig/kubeconfig.westus2.json
I1113 23:44:58.970] STEP: Building a namespace api object, basename azuredisk
I1113 23:44:58.970] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-9378
I1113 23:44:58.970] STEP: Waiting for a default service account to be provisioned in namespace
I1113 23:44:58.970] STEP: setting up the StorageClass
I1113 23:44:58.970] STEP: creating a StorageClass
I1113 23:44:58.971] STEP: setting up the PVC and PV
I1113 23:44:58.971] STEP: creating a PVC
I1113 23:44:58.971] STEP: waiting for PVC to be in phase "Bound"
I1113 23:44:58.971] Nov 13 23:44:30.594: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-8phfc] to have phase Bound
I1113 23:44:58.971] Nov 13 23:44:30.597: INFO: PersistentVolumeClaim pvc-8phfc found but phase is Pending instead of Bound.
I1113 23:44:58.972] Nov 13 23:44:32.599: INFO: PersistentVolumeClaim pvc-8phfc found but phase is Pending instead of Bound.
I1113 23:44:58.972] Nov 13 23:44:34.603: INFO: PersistentVolumeClaim pvc-8phfc found but phase is Pending instead of Bound.
I1113 23:44:58.972] Nov 13 23:44:36.606: INFO: PersistentVolumeClaim pvc-8phfc found but phase is Pending instead of Bound.
I1113 23:44:58.972] Nov 13 23:44:38.609: INFO: PersistentVolumeClaim pvc-8phfc found but phase is Pending instead of Bound.
I1113 23:44:58.972] Nov 13 23:44:40.613: INFO: PersistentVolumeClaim pvc-8phfc found but phase is Pending instead of Bound.
I1113 23:44:58.973] Nov 13 23:44:42.616: INFO: PersistentVolumeClaim pvc-8phfc found and phase=Bound (12.021649306s)
I1113 23:44:58.973] STEP: checking the PVC
I1113 23:44:58.973] STEP: validating provisioned PV
I1113 23:44:58.973] STEP: checking the PV
I1113 23:44:58.973] Nov 13 23:44:42.621: INFO: deleting PVC "azuredisk-9378"/"pvc-8phfc"
I1113 23:44:58.973] Nov 13 23:44:42.621: INFO: Deleting PersistentVolumeClaim "pvc-8phfc"
I1113 23:44:58.974] Nov 13 23:44:42.625: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9378 to be removed
I1113 23:44:58.974] Nov 13 23:44:42.634: INFO: Claim "azuredisk-9378" in namespace "pvc-8phfc" doesn't exist in the system
I1113 23:44:58.974] Nov 13 23:44:42.634: INFO: Waiting up to 10m0s for PersistentVolume pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c to have phase Released
I1113 23:44:58.974] Nov 13 23:44:42.636: INFO: PersistentVolume pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c found but phase is Bound instead of Released.
I1113 23:44:58.974] Nov 13 23:44:47.640: INFO: PersistentVolume pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c found and phase=Released (5.00605736s)
I1113 23:44:58.975] STEP: deleting PV "pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c"
I1113 23:44:58.975] Nov 13 23:44:47.640: INFO: Deleting PersistentVolume "pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c"
I1113 23:44:58.975] STEP: waiting for claim's PV "pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c" to be deleted
I1113 23:44:58.975] Nov 13 23:44:47.643: INFO: Waiting up to 10m0s for PersistentVolume pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c to get deleted
I1113 23:44:58.975] Nov 13 23:44:47.653: INFO: PersistentVolume pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c found and phase=Released (9.575675ms)
I1113 23:44:58.976] Nov 13 23:44:52.655: INFO: PersistentVolume pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c was removed
I1113 23:44:58.976] STEP: Collecting events from namespace "azuredisk-9378".
I1113 23:44:58.976] STEP: Found 1 events.
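The suite polls the claim until it reaches phase Bound within a 5m0s budget; a hand-rolled kubectl equivalent of that wait (namespace and claim name taken from the log above) might look like:

  # Poll a PVC every 2s until Bound, giving up after ~5 minutes
  phase=""
  for i in $(seq 1 150); do
    phase=$(kubectl -n azuredisk-9378 get pvc pvc-8phfc -o jsonpath='{.status.phase}')
    [ "$phase" = "Bound" ] && break
    sleep 2
  done
  [ "$phase" = "Bound" ] || { echo "PVC never reached Bound"; exit 1; }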
I1113 23:44:58.976] Nov 13 23:44:52.658: INFO: At 2019-11-13 23:44:40 +0000 UTC - event for pvc-8phfc: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-2d2d421c-3639-4029-9a93-075e6ee7de4c using kubernetes.io/azure-disk
I1113 23:44:58.976] Nov 13 23:44:52.660: INFO: POD NODE PHASE GRACE CONDITIONS
I1113 23:44:58.977] Nov 13 23:44:52.660: INFO:
I1113 23:44:58.977] Nov 13 23:44:52.666: INFO:
I1113 23:44:58.977] Logging node info for node k8s-agentpool1-44273024-vmss000000
I1113 23:44:58.981] Nov 13 23:44:52.668: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool1-44273024-vmss000000,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool1-44273024-vmss000000,UID:4e669861-a44b-4a2f-9d93-2925772781c2,ResourceVersion:3052,Generation:0,CreationTimestamp:2019-11-13 23:26:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool1,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_D4s_v3,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 0,kubernetes.azure.com/cluster: kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool1-44273024-vmss000000,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,node.kubernetes.io/instance-type: Standard_D4s_v3,storageprofile: managed,storagetier: Premium_LRS,topology.kubernetes.io/region: westus2,topology.kubernetes.io/zone: 0,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool1-44273024-vmss000000"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-44273024-vmss/virtualMachines/0,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{16797569024 0} {} 16403876Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{16011137024 0} {} 15635876Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 23:28:17 +0000 UTC 2019-11-13 23:28:17 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 23:39:52 +0000 UTC 2019-11-13 23:25:58 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 23:39:52 +0000 UTC 2019-11-13 23:25:58 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 23:39:52 +0000 UTC 2019-11-13 23:25:58 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 23:39:52 +0000 UTC 2019-11-13 23:26:14 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{Hostname k8s-agentpool1-44273024-vmss000000} {InternalIP 10.240.0.4}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ae0e9955280f4728b49bbd9c9f68fcbd,SystemUUID:12324627-8C81-844D-B4E0-91C2F50F5251,BootID:bdf3b855-bc57-466b-9864-58c6df92179c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.840+8af6906d1f300f,KubeProxyVersion:v1.18.0-alpha.0.840+8af6906d1f300f,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:2cf617ec5812780ff2d2908a03888bbbbb3ff3816c91c32baab228dc4b039cb7 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194753232421785601-a7885bd7] 852342180} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632 k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35] 287084775} {[quay.io/k8scsi/csi-attacher@sha256:6425af42299ba211de685a94953a5c4c6fcbfd2494e445437dd9ebd70b28bf8a quay.io/k8scsi/csi-attacher:v1.0.1] 50168619} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0] 50148508} {[quay.io/k8scsi/csi-provisioner@sha256:7d7d832832b536f32e899669a32d4fb75ab972da20c21a2bd6043eb498cf58e8 quay.io/k8scsi/csi-provisioner:v1.0.1] 47974767} {[quay.io/k8scsi/csi-cluster-driver-registrar@sha256:fafd75ae5442f192cfa8c2e792903aee30d5884b62e802e4464b0a895d21e3ef quay.io/k8scsi/csi-cluster-driver-registrar:v1.0.1] 45874691} {[quay.io/k8scsi/csi-snapshotter@sha256:a49e0da1af6f2bf717e41ba1eee8b5e6a1cbd66a709dd92cc43fe475fe2589eb quay.io/k8scsi/csi-snapshotter:v1.1.0] 44325352} {[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2] 44100963} {[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13] 25024769} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} {[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0] 14967303} {[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0] 738384}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
I1113 23:44:58.981] Nov 13 23:44:52.668: INFO:
I1113 23:44:58.982] Logging kubelet events for node k8s-agentpool1-44273024-vmss000000
I1113 23:44:58.982] Nov 13 23:44:52.672: INFO:
I1113 23:44:58.982] Logging pods the kubelet thinks is on node k8s-agentpool1-44273024-vmss000000
I1113 23:44:58.982] Nov 13 23:44:52.706: INFO: csi-azuredisk-node-rzmsd started at 2019-11-13 23:28:36 +0000 UTC (0+3 container statuses recorded)
I1113 23:44:58.982] Nov 13 23:44:52.706: INFO: Container azuredisk ready: true, restart count 0
I1113 23:44:58.983] Nov 13 23:44:52.706: INFO: Container liveness-probe ready: true, restart count 0
I1113 23:44:58.983] Nov 13 23:44:52.706: INFO: Container node-driver-registrar ready: true, restart count 0
I1113 23:44:58.983] Nov 13 23:44:52.706: INFO: csi-azuredisk-controller-98cdd8546-pfhjp started at 2019-11-13 23:28:36 +0000 UTC (0+6 container statuses recorded)
I1113 23:44:58.983] Nov 13 23:44:52.706: INFO: Container azuredisk ready: true, restart count 0
I1113 23:44:58.983] Nov 13 23:44:52.706: INFO: Container cluster-driver-registrar ready: true, restart count 0
I1113 23:44:58.984] Nov 13 23:44:52.706: INFO: Container csi-attacher ready: true, restart count 0
I1113 23:44:58.984] Nov 13 23:44:52.706: INFO: Container csi-provisioner ready: true, restart count 0
I1113 23:44:58.984] Nov 13 23:44:52.706: INFO: Container csi-snapshotter ready: true, restart count 0
I1113 23:44:58.984] Nov 13 23:44:52.706: INFO: Container liveness-probe ready: true, restart count 0
I1113 23:44:58.984] Nov 13 23:44:52.706: INFO: coredns-56bc7dfcc6-5kwmf started at 2019-11-13 23:26:22 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.984] Nov 13 23:44:52.706: INFO: Container coredns ready: true, restart count 0
I1113 23:44:58.985] Nov 13 23:44:52.706: INFO: azure-ip-masq-agent-2954j started at 2019-11-13 23:26:22 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.985] Nov 13 23:44:52.706: INFO: Container azure-ip-masq-agent ready: true, restart count 0
I1113 23:44:58.985] Nov 13 23:44:52.706: INFO: kube-proxy-6kkbg started at 2019-11-13 23:26:23 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.985] Nov 13 23:44:52.706: INFO: Container kube-proxy ready: true, restart count 0
I1113 23:44:58.985] Nov 13 23:44:52.706: INFO: blobfuse-flexvol-installer-nr7t5 started at 2019-11-13 23:26:29 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.986] Nov 13 23:44:52.706: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
I1113 23:44:58.986] Nov 13 23:44:52.706: INFO: keyvault-flexvolume-x5lk9 started at 2019-11-13 23:26:29 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.986] Nov 13 23:44:52.706: INFO: Container keyvault-flexvolume ready: true, restart count 0
I1113 23:44:58.986] Nov 13 23:44:52.728: INFO:
I1113 23:44:58.986] Latency metrics for node k8s-agentpool1-44273024-vmss000000
I1113 23:44:58.987] Nov 13 23:44:52.728: INFO:
I1113 23:44:58.987] Logging node info for node k8s-agentpool1-44273024-vmss000001
I1113 23:44:58.991] Nov 13 23:44:52.744: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool1-44273024-vmss000001,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool1-44273024-vmss000001,UID:641e9d74-2ab0-4920-95f8-6d2f8b8b6b9d,ResourceVersion:3658,Generation:0,CreationTimestamp:2019-11-13 23:26:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool1,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_D4s_v3,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 1,kubernetes.azure.com/cluster: kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool1-44273024-vmss000001,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,node.kubernetes.io/instance-type: Standard_D4s_v3,storageprofile: managed,storagetier: Premium_LRS,topology.kubernetes.io/region: westus2,topology.kubernetes.io/zone: 1,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool1-44273024-vmss000001"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-44273024-vmss/virtualMachines/1,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{16797569024 0} {} 16403876Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{16011137024 0} {} 15635876Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 23:28:27 +0000 UTC 2019-11-13 23:28:27 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 23:43:22 +0000 UTC 2019-11-13 23:25:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 23:43:22 +0000 UTC 2019-11-13 23:25:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 23:43:22 +0000 UTC 2019-11-13 23:25:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 23:43:22 +0000 UTC 2019-11-13 23:26:14 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{Hostname k8s-agentpool1-44273024-vmss000001} {InternalIP 10.240.0.5}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8697623a6528429db5e42aa8d00e63f2,SystemUUID:2FDBB6C6-BD2E-C44C-8123-C51EA3C11DC8,BootID:532f95e3-1ca6-43c9-9c3c-df3aa68bcf17,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.840+8af6906d1f300f,KubeProxyVersion:v1.18.0-alpha.0.840+8af6906d1f300f,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:2cf617ec5812780ff2d2908a03888bbbbb3ff3816c91c32baab228dc4b039cb7 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194753232421785601-a7885bd7] 852342180} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632 k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35] 287084775} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[gcr.io/kubernetes-helm/tiller@sha256:d52b34a9f9aeec1cf74155ca51fcbb5d872a705914565c782be4531790a4ee0e gcr.io/kubernetes-helm/tiller:v2.13.1] 82105090} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0] 50148508} {[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff k8s.gcr.io/metrics-server-amd64:v0.3.4] 39944451} {[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13] 25024769} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} {[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0] 738384}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
I1113 23:44:58.991] Nov 13 23:44:52.744: INFO:
I1113 23:44:58.991] Logging kubelet events for node k8s-agentpool1-44273024-vmss000001
I1113 23:44:58.991] Nov 13 23:44:52.749: INFO:
I1113 23:44:58.991] Logging pods the kubelet thinks is on node k8s-agentpool1-44273024-vmss000001
I1113 23:44:58.992] Nov 13 23:44:52.775: INFO: azure-ip-masq-agent-8jdrr started at 2019-11-13 23:26:23 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.992] Nov 13 23:44:52.775: INFO: Container azure-ip-masq-agent ready: true, restart count 0
I1113 23:44:58.992] Nov 13 23:44:52.775: INFO: metrics-server-855b565c8f-2pkh6 started at 2019-11-13 23:26:23 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.992] Nov 13 23:44:52.775: INFO: Container metrics-server ready: true, restart count 0
I1113 23:44:58.993] Nov 13 23:44:52.775: INFO: tiller-deploy-598f7df6d6-rk6wh started at 2019-11-13 23:26:23 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.993] Nov 13 23:44:52.775: INFO: Container tiller ready: true, restart count 0
I1113 23:44:58.993] Nov 13 23:44:52.775: INFO: kubernetes-dashboard-65966766b9-4q788 started at 2019-11-13 23:26:23 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.993] Nov 13 23:44:52.775: INFO: Container kubernetes-dashboard ready: true, restart count 0
I1113 23:44:58.994] Nov 13 23:44:52.775: INFO: csi-azuredisk-node-58qqk started at 2019-11-13 23:28:36 +0000 UTC (0+3 container statuses recorded)
I1113 23:44:58.994] Nov 13 23:44:52.775: INFO: Container azuredisk ready: true, restart count 0
I1113 23:44:58.994] Nov 13 23:44:52.775: INFO: Container liveness-probe ready: true, restart count 0
I1113 23:44:58.994] Nov 13 23:44:52.775: INFO: Container node-driver-registrar ready: true, restart count 0
I1113 23:44:58.994] Nov 13 23:44:52.775: INFO: blobfuse-flexvol-installer-7xs7x started at 2019-11-13 23:26:23 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.995] Nov 13 23:44:52.775: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
I1113 23:44:58.995] Nov 13 23:44:52.775: INFO: keyvault-flexvolume-67fkw started at 2019-11-13 23:26:23 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.995] Nov 13 23:44:52.775: INFO: Container keyvault-flexvolume ready: true, restart count 0
I1113 23:44:58.995] Nov 13 23:44:52.775: INFO: kube-proxy-mqh5j started at 2019-11-13 23:26:23 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:58.995] Nov 13 23:44:52.775: INFO: Container kube-proxy ready: true, restart count 0
I1113 23:44:58.995] Nov 13 23:44:52.805: INFO:
I1113 23:44:58.996] Latency metrics for node k8s-agentpool1-44273024-vmss000001
I1113 23:44:58.996] Nov 13 23:44:52.805: INFO:
I1113 23:44:58.996] Logging node info for node k8s-master-44273024-0
I1113 23:44:58.999] Nov 13 23:44:52.808: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master-44273024-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master-44273024-0,UID:1df68afb-ff66-46e6-9f2d-01d6a70d956b,ResourceVersion:3399,Generation:0,CreationTimestamp:2019-11-13 23:26:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 0,kubernetes.azure.com/cluster: kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d,kubernetes.azure.com/role: master,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-master-44273024-0,kubernetes.io/os: linux,kubernetes.io/role: master,node-role.kubernetes.io/master: ,node.kubernetes.io/instance-type: Standard_DS2_v2,topology.kubernetes.io/region: westus2,topology.kubernetes.io/zone: 0,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d/providers/Microsoft.Compute/virtualMachines/k8s-master-44273024-0,Unschedulable:false,Taints:[{node-role.kubernetes.io/master true NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0}
{} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{7284883456 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498451456 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 23:28:27 +0000 UTC 2019-11-13 23:28:27 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 23:41:51 +0000 UTC 2019-11-13 23:25:56 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 23:41:51 +0000 UTC 2019-11-13 23:25:56 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 23:41:51 +0000 UTC 2019-11-13 23:25:56 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 23:41:51 +0000 UTC 2019-11-13 23:26:14 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{Hostname k8s-master-44273024-0} {InternalIP 10.240.255.5}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5dc97d36044840d4920b2ebb04ea87e7,SystemUUID:4298184F-773E-914E-A7F6-B5CD36ECBE34,BootID:91fba9aa-1896-44f3-b603-d0f2a3dd60f2,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.840+8af6906d1f300f,KubeProxyVersion:v1.18.0-alpha.0.840+8af6906d1f300f,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:2cf617ec5812780ff2d2908a03888bbbbb3ff3816c91c32baab228dc4b039cb7 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194753232421785601-a7885bd7] 852342180} {[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2] 83076028} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0] 50148508} {[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0] 738384}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} I1113 23:44:58.999] Nov 13 23:44:52.808: INFO: I1113 23:44:58.999] Logging kubelet events for node k8s-master-44273024-0 I1113 23:44:58.999] Nov 13 23:44:52.812: INFO: I1113 23:44:59.000] Logging pods the kubelet thinks is on node k8s-master-44273024-0 I1113 23:44:59.000] Nov 13 23:44:52.839: INFO: kube-addon-manager-k8s-master-44273024-0 started at 2019-11-13 23:25:50 +0000 UTC (0+1 container statuses recorded) I1113 23:44:59.000] Nov 13 23:44:52.839: INFO: Container kube-addon-manager ready: true, restart count 0 I1113 23:44:59.000] Nov 13 23:44:52.839: INFO: kube-apiserver-k8s-master-44273024-0 started at 2019-11-13 23:25:50 +0000 UTC (0+1 container statuses recorded) I1113 23:44:59.000] Nov 13 23:44:52.839: INFO: Container kube-apiserver ready: true, restart count 0 I1113 23:44:59.001] Nov 13 23:44:52.839: INFO: kube-controller-manager-k8s-master-44273024-0 started at 2019-11-13 23:25:50 +0000 UTC (0+1 container statuses recorded) I1113 23:44:59.001] Nov 13 23:44:52.839: INFO: Container 
kube-controller-manager ready: true, restart count 0
I1113 23:44:59.001] Nov 13 23:44:52.839: INFO: kube-scheduler-k8s-master-44273024-0 started at 2019-11-13 23:25:50 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:59.001] Nov 13 23:44:52.839: INFO: Container kube-scheduler ready: true, restart count 0
I1113 23:44:59.002] Nov 13 23:44:52.839: INFO: azure-ip-masq-agent-hsglr started at 2019-11-13 23:26:22 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:59.002] Nov 13 23:44:52.839: INFO: Container azure-ip-masq-agent ready: true, restart count 0
I1113 23:44:59.002] Nov 13 23:44:52.839: INFO: kube-proxy-bdv7c started at 2019-11-13 23:26:23 +0000 UTC (0+1 container statuses recorded)
I1113 23:44:59.002] Nov 13 23:44:52.839: INFO: Container kube-proxy ready: true, restart count 0
I1113 23:44:59.002] Nov 13 23:44:52.869: INFO:
I1113 23:44:59.002] Latency metrics for node k8s-master-44273024-0
I1113 23:44:59.003] Nov 13 23:44:52.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I1113 23:44:59.003] STEP: Destroying namespace "azuredisk-9378" for this suite.
I1113 23:44:59.003] Nov 13 23:44:58.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I1113 23:44:59.003] Nov 13 23:44:58.963: INFO: namespace azuredisk-9378 deletion completed in 6.090533918s
I1113 23:44:59.003] •! Panic [28.526 seconds]
I1113 23:44:59.004] Dynamic Provisioning
I1113 23:44:59.004] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 23:44:59.004] [env] should retain PV with reclaimPolicy "Retain" [It]
I1113 23:44:59.004] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:194
I1113 23:44:59.004]
I1113 23:44:59.004] Test Panicked
I1113 23:44:59.005] runtime error: invalid memory address or nil pointer dereference
I1113 23:44:59.005] /usr/local/go/src/runtime/panic.go:199
I1113 23:44:59.005]
I1113 23:44:59.005] Full Stack Trace
I1113 23:44:59.005] /usr/local/go/src/runtime/panic.go:679 +0x1b2
I1113 23:44:59.005] sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).DeleteBackingVolume(0xc00009d2c0, 0xc0001f01b0)
I1113 23:44:59.006] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:349 +0x42
I1113 23:44:59.006] sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedReclaimPolicyTest).Run(0xc000db73d8, 0x20432a0, 0xc000e503c0, 0xc000c402c0)
I1113 23:44:59.006] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_reclaim_policy_tester.go:44 +0x1a7
I1113 23:44:59.006] sigs.k8s.io/azuredisk-csi-driver/test/e2e.glob..func1.8()
I1113 23:44:59.006] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:208 +0x174
I1113 23:44:59.007] sigs.k8s.io/azuredisk-csi-driver/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00056d6e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
I1113 23:44:59.007] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:133 +0x96
I1113 23:44:59.007] testing.tRunner(0xc0004f8f00, 0x1dd2110)
I1113 23:44:59.007] /usr/local/go/src/testing/testing.go:909 +0xc9
I1113 23:44:59.007] created by testing.(*T).Run
I1113 23:44:59.007] /usr/local/go/src/testing/testing.go:960 +0x350
I1113 23:44:59.008]
I1113 23:44:59.008] ------------------------------
I1113 23:45:21.071] STEP: Creating a kubernetes client
I1113 23:45:21.072] Nov 13 23:44:58.963: INFO: >>> kubeConfig: /workspace/aks843422017/kubeconfig/kubeconfig.westus2.json
I1113 23:45:21.072] STEP: Building a namespace api object, basename azuredisk
I1113 23:45:21.072] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-1309
I1113 23:45:21.072] STEP: Waiting for a default service account to be provisioned in namespace
I1113 23:45:21.073] STEP: Successfully provisioned AzureDisk volume: "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d/providers/Microsoft.Compute/disks/pre-provisioned-readOnly"
I1113 23:45:21.073]
I1113 23:45:21.073] STEP: setting up the PV
I1113 23:45:21.073] STEP: creating a PV
I1113 23:45:21.074] Nov 13 23:45:09.751: INFO: Unexpected error occurred: PersistentVolume "azuredisk-1309-kubernetes.io-azure-disk-preprovsioned-pv-9lx2q" is invalid: spec.csi.driver: Invalid value: "kubernetes.io/azure-disk": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
I1113 23:45:21.074] STEP: Collecting events from namespace "azuredisk-1309".
I1113 23:45:21.074] STEP: Found 0 events.
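The panic above comes from the suite's own cleanup path: (*TestPersistentVolumeClaim).DeleteBackingVolume dereferences something that is nil at testsuites.go:349 when the reclaim-policy case runs in this in-tree configuration. Which field is nil cannot be read off the trace alone, so the sketch below only illustrates the failure shape and the guard that would turn the crash into an ordinary test error; the type and field names are hypothetical stand-ins, not the repository's actual code.

```go
package main

import "fmt"

// testPVC is a stand-in for the e2e suite's TestPersistentVolumeClaim; the
// field names here are illustrative only.
type testPVC struct {
	persistentVolume *persistentVolume // stays nil if binding/provisioning never completed
}

type persistentVolume struct {
	volumeHandle string
}

// deleteBackingVolume sketches the guard that would prevent the observed
// nil-pointer panic: dereferencing t.persistentVolume without checking it is
// exactly the kind of access that crashes when no PV was ever recorded.
func (t *testPVC) deleteBackingVolume() error {
	if t.persistentVolume == nil {
		return fmt.Errorf("no bound PV recorded; skipping backing-volume cleanup")
	}
	fmt.Printf("deleting backing volume %q\n", t.persistentVolume.volumeHandle)
	return nil
}

func main() {
	var pvc testPVC // persistentVolume left nil, as when provisioning failed
	if err := pvc.deleteBackingVolume(); err != nil {
		fmt.Println("cleanup error:", err)
	}
}
```

Run against a claim that never bound, the guarded version reports the missing PV instead of killing the ginkgo leaf node with a runtime panic.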
[node info, kubelet event/pod, and latency-metrics dumps for k8s-agentpool1-44273024-vmss000000, k8s-agentpool1-44273024-vmss000001, and k8s-master-44273024-0 elided; apart from the collection timestamps they repeat the diagnostics above verbatim (same ResourceVersions 3941, 3658, 3399)]
I1113 23:45:21.097] Nov 13 23:45:09.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I1113 23:45:21.097] STEP: Destroying namespace "azuredisk-1309" for this suite.
I1113 23:45:21.097] Nov 13 23:45:15.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I1113 23:45:21.098] Nov 13 23:45:15.968: INFO: namespace azuredisk-1309 deletion completed in 6.088910899s
I1113 23:45:21.098] • Failure [22.108 seconds]
I1113 23:45:21.098] [azuredisk-csi-e2e] [single-az] Pre-Provisioned
I1113 23:45:21.098] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning.go:36
I1113 23:45:21.098] [env] should use a pre-provisioned volume and mount it as readOnly in a pod [It]
I1113 23:45:21.098] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning.go:66
I1113 23:45:21.098]
I1113 23:45:21.098] Unexpected error:
I1113 23:45:21.099]     <*errors.StatusError | 0xc000913900>: {
I1113 23:45:21.099]         ErrStatus: {
I1113 23:45:21.099]             TypeMeta: {Kind: "", APIVersion: ""},
I1113 23:45:21.099]             ListMeta: {
I1113 23:45:21.099]                 SelfLink: "",
I1113 23:45:21.099]                 ResourceVersion: "",
I1113 23:45:21.099]                 Continue: "",
I1113 23:45:21.100]                 RemainingItemCount: nil,
I1113 23:45:21.100]             },
I1113 23:45:21.100]             Status: "Failure",
I1113 23:45:21.100]             Message: "PersistentVolume \"azuredisk-1309-kubernetes.io-azure-disk-preprovsioned-pv-9lx2q\" is invalid: spec.csi.driver: Invalid value: \"kubernetes.io/azure-disk\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
I1113 23:45:21.100]             Reason: "Invalid",
I1113 23:45:21.100]             Details: {
I1113 23:45:21.101]                 Name: "azuredisk-1309-kubernetes.io-azure-disk-preprovsioned-pv-9lx2q",
I1113 23:45:21.101]                 Group: "",
I1113 23:45:21.101]                 Kind: "PersistentVolume",
I1113 23:45:21.101]                 UID: "",
I1113 23:45:21.101]                 Causes: [
I1113 23:45:21.101]                     {
I1113 23:45:21.101]                         Type: "FieldValueInvalid",
I1113 23:45:21.102]                         Message: "Invalid value: \"kubernetes.io/azure-disk\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
I1113 23:45:21.102]                         Field: "spec.csi.driver",
I1113 23:45:21.102]                     },
I1113 23:45:21.102]                 ],
I1113 23:45:21.102]                 RetryAfterSeconds: 0,
I1113 23:45:21.102]             },
I1113 23:45:21.102]             Code: 422,
I1113 23:45:21.103]         },
I1113 23:45:21.103]     }
I1113 23:45:21.103] PersistentVolume "azuredisk-1309-kubernetes.io-azure-disk-preprovsioned-pv-9lx2q" is invalid: spec.csi.driver: Invalid value: "kubernetes.io/azure-disk": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
I1113 23:45:21.103] occurred
I1113 23:45:21.103]
I1113 23:45:21.103] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:174
I1113 23:45:21.104] ------------------------------
I1113 23:45:42.803] STEP: Creating a kubernetes client
I1113 23:45:42.803] Nov 13 23:45:21.071: INFO: >>> kubeConfig: /workspace/aks843422017/kubeconfig/kubeconfig.westus2.json
I1113 23:45:42.804] STEP: Building a namespace api object, basename azuredisk
I1113 23:45:42.804] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-7097
I1113 23:45:42.804] STEP: Waiting for a default service account to be provisioned in namespace
I1113 23:45:42.804] STEP: Successfully provisioned AzureDisk volume: "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d/providers/Microsoft.Compute/disks/pre-provisioned-retain-reclaimPolicy"
I1113 23:45:42.805]
I1113 23:45:42.805] STEP: setting up the PV
I1113 23:45:42.805] STEP: creating a PV
I1113 23:45:42.806] Nov 13 23:45:31.452: INFO: Unexpected error occurred: PersistentVolume "azuredisk-7097-kubernetes.io-azure-disk-preprovsioned-pv-dgk6x" is invalid: spec.csi.driver: Invalid value: "kubernetes.io/azure-disk": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
I1113 23:45:42.806] STEP: Collecting events from namespace "azuredisk-7097".
I1113 23:45:42.806] STEP: Found 0 events.
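These pre-provisioned failures are all the same API-server rejection: the test fills spec.csi.driver with the in-tree plugin name kubernetes.io/azure-disk, and that field is validated as a DNS-1123 subdomain, which a '/' can never satisfy. A name like disk.csi.azure.com (the name this CSI driver registers on the nodes, per the csi.volume.kubernetes.io/nodeid annotations in the dumps above) passes. A minimal sketch using the exact regex quoted in the error, anchored the way Kubernetes applies it; the separate 253-character length cap is omitted:

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1123Subdomain is the pattern quoted verbatim in the API server's error
// message above; Kubernetes matches the whole string, hence the ^...$ anchors.
var dns1123Subdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func main() {
	for _, driver := range []string{
		"kubernetes.io/azure-disk", // in-tree plugin name: '/' fails validation
		"disk.csi.azure.com",       // the CSI driver's registered name: a valid subdomain
	} {
		fmt.Printf("%-26s valid=%v\n", driver, dns1123Subdomain.MatchString(driver))
	}
}
```

So the 422s are deterministic: no PV whose spec.csi.driver carries an in-tree plugin name can ever be created, which is why each of these cases dies at the "creating a PV" step before any volume is mounted.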
[node info, kubelet event/pod, and latency-metrics dumps for the same three nodes elided again; verbatim repeats of the diagnostics collected for the previous failure]
count 0 I1113 23:45:42.833] Nov 13 23:45:31.549: INFO: kube-controller-manager-k8s-master-44273024-0 started at 2019-11-13 23:25:50 +0000 UTC (0+1 container statuses recorded) I1113 23:45:42.833] Nov 13 23:45:31.549: INFO: Container kube-controller-manager ready: true, restart count 0 I1113 23:45:42.833] Nov 13 23:45:31.549: INFO: kube-scheduler-k8s-master-44273024-0 started at 2019-11-13 23:25:50 +0000 UTC (0+1 container statuses recorded) I1113 23:45:42.833] Nov 13 23:45:31.549: INFO: Container kube-scheduler ready: true, restart count 0 I1113 23:45:42.834] Nov 13 23:45:31.549: INFO: azure-ip-masq-agent-hsglr started at 2019-11-13 23:26:22 +0000 UTC (0+1 container statuses recorded) I1113 23:45:42.834] Nov 13 23:45:31.549: INFO: Container azure-ip-masq-agent ready: true, restart count 0 I1113 23:45:42.834] Nov 13 23:45:31.567: INFO: I1113 23:45:42.834] Latency metrics for node k8s-master-44273024-0 I1113 23:45:42.834] Nov 13 23:45:31.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I1113 23:45:42.834] STEP: Destroying namespace "azuredisk-7097" for this suite. I1113 23:45:42.835] Nov 13 23:45:37.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered I1113 23:45:42.835] Nov 13 23:45:37.670: INFO: namespace azuredisk-7097 deletion completed in 6.100111279s I1113 23:45:42.835] • Failure [21.731 seconds] I1113 23:45:42.835] [azuredisk-csi-e2e] [single-az] Pre-Provisioned I1113 23:45:42.835] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning.go:36 I1113 23:45:42.835] [env] should use a pre-provisioned volume and retain PV with reclaimPolicy "Retain" [It] I1113 23:45:42.836] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning.go:100 I1113 23:45:42.836] I1113 23:45:42.836] Unexpected error: I1113 23:45:42.836] <*errors.StatusError | 0xc0008d4a00>: { I1113 23:45:42.836] ErrStatus: { I1113 23:45:42.837] TypeMeta: {Kind: "", APIVersion: ""}, I1113 23:45:42.837] ListMeta: { I1113 23:45:42.837] SelfLink: "", I1113 23:45:42.837] ResourceVersion: "", I1113 23:45:42.837] Continue: "", I1113 23:45:42.837] RemainingItemCount: nil, I1113 23:45:42.837] }, I1113 23:45:42.837] Status: "Failure", I1113 23:45:42.838] Message: "PersistentVolume \"azuredisk-7097-kubernetes.io-azure-disk-preprovsioned-pv-dgk6x\" is invalid: spec.csi.driver: Invalid value: \"kubernetes.io/azure-disk\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')", I1113 23:45:42.838] Reason: "Invalid", I1113 23:45:42.838] Details: { I1113 23:45:42.838] Name: "azuredisk-7097-kubernetes.io-azure-disk-preprovsioned-pv-dgk6x", I1113 23:45:42.838] Group: "", I1113 23:45:42.838] Kind: "PersistentVolume", I1113 23:45:42.838] UID: "", I1113 23:45:42.839] Causes: [ I1113 23:45:42.839] { I1113 23:45:42.839] Type: "FieldValueInvalid", I1113 23:45:42.839] Message: "Invalid value: \"kubernetes.io/azure-disk\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 
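The failure above is a validation error, not an infrastructure flake: the pre-provisioning test creates a PersistentVolume whose spec.csi.driver is set to the in-tree plugin name kubernetes.io/azure-disk, but the API server requires a CSI driver name to be a DNS-1123 subdomain, and the '/' in the in-tree name can never satisfy that pattern (a registered CSI driver name such as disk.csi.azure.com can). A minimal Python sketch of the check, using the exact regex quoted in the error; the first driver name is taken from the failing PV, the second is the Azure Disk CSI driver's registered name, shown for contrast:

    import re

    # DNS-1123 subdomain regex, copied verbatim from the validation error above.
    DNS1123_SUBDOMAIN = re.compile(
        r"[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*")

    for driver in ("kubernetes.io/azure-disk",  # in-tree plugin name from the failing PV
                   "disk.csi.azure.com"):       # CSI driver name, for contrast
        verdict = "valid" if DNS1123_SUBDOMAIN.fullmatch(driver) else "invalid"
        print("%-26s %s" % (driver, verdict))  # '/' is not in the allowed alphabet

Running this prints "invalid" for kubernetes.io/azure-disk and "valid" for disk.csi.azure.com, which is exactly the 422 the API server returned.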
I1113 23:45:42.841] 2019/11/13 23:45:42 Uninstalling Azure Disk CSI Driver...
I1113 23:45:42.842] make[1]: Entering directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 23:45:42.842] helm delete --purge azuredisk-csi-driver
I1113 23:45:48.433] release "azuredisk-csi-driver" deleted
I1113 23:45:48.436] make[1]: Leaving directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 23:45:48.436] 2019/11/13 23:45:48 Azure Disk CSI Driver uninstalled
I1113 23:45:48.436]
I1113 23:45:48.437]
I1113 23:45:48.437] Summarizing 3 Failures:
I1113 23:45:48.437]
I1113 23:45:48.437] [Panic!] Dynamic Provisioning [It] [env] should retain PV with reclaimPolicy "Retain"
I1113 23:45:48.437] /usr/local/go/src/runtime/panic.go:199
I1113 23:45:48.437]
I1113 23:45:48.437] [Fail] [azuredisk-csi-e2e] [single-az] Pre-Provisioned [It] [env] should use a pre-provisioned volume and mount it as readOnly in a pod
I1113 23:45:48.438] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:174
I1113 23:45:48.438]
I1113 23:45:48.438] [Fail] [azuredisk-csi-e2e] [single-az] Pre-Provisioned [It] [env] should use a pre-provisioned volume and retain PV with reclaimPolicy "Retain"
I1113 23:45:48.438] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:174
I1113 23:45:48.438]
I1113 23:45:48.438] Ran 9 of 9 Specs in 1056.350 seconds
I1113 23:45:48.438] FAIL! -- 6 Passed | 3 Failed | 0 Pending | 0 Skipped
I1113 23:45:48.439] --- FAIL: TestE2E (1056.35s)
I1113 23:45:48.439] FAIL
I1113 23:45:48.440] FAIL sigs.k8s.io/azuredisk-csi-driver/test/e2e 1056.397s
I1113 23:45:48.446] FAIL
W1113 23:45:48.718] make: *** [Makefile:59: e2e-test] Error 1
W1113 23:45:48.718] 2019/11/13 23:45:48 process.go:155: Step 'make e2e-test' finished in 19m2.141488625s
W1113 23:45:48.718] 2019/11/13 23:45:48 azure.go:922: Skipping DumpClusterLogs due to CCM not being enabled.
W1113 23:45:48.718] 2019/11/13 23:45:48 azure.go:916: Deleting resource group: kubetest-5b2f78cb-066a-11ea-8c13-0242df6be97d.
W1113 23:53:34.377] 2019/11/13 23:53:34 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W1113 23:53:34.377] 2019/11/13 23:53:34 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W1113 23:53:34.821] 2019/11/13 23:53:34 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 444.354158ms
W1113 23:53:34.822] 2019/11/13 23:53:34 main.go:319: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
W1113 23:53:34.824] Traceback (most recent call last):
W1113 23:53:34.824]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W1113 23:53:34.824]     main(parse_args())
W1113 23:53:34.825]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W1113 23:53:34.825]     mode.start(runner_args)
W1113 23:53:34.825]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W1113 23:53:34.825]     check_env(env, self.command, *args)
W1113 23:53:34.825]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W1113 23:53:34.825]     subprocess.check_call(cmd, env=env)
W1113 23:53:34.825]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W1113 23:53:34.826]     raise CalledProcessError(retcode, cmd)
W1113 23:53:34.826] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=host-go', '--up', '--down', '--test', '--deployment=acsengine', '--provider=skeleton', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--ginkgo-parallel=1', '--acsengine-agentpoolcount=2', '--acsengine-admin-username=azureuser', '--acsengine-creds=/etc/azure-cred/credentials', '--acsengine-orchestratorRelease=1.16', '--acsengine-mastervmsize=Standard_DS2_v2', '--acsengine-agentvmsize=Standard_D4s_v3', '--acsengine-hyperkube=True', '--acsengine-location=westus2', '--acsengine-public-key=/etc/azure-ssh/azure-ssh-pub', '--acsengine-template-url=https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/test/e2e/manifest/in-tree-vmss.json', '--acsengine-download-url=https://chuwontest.blob.core.windows.net/aks-engine/aks-engine-8cf7c2a74-linux-amd64.tar.gz', '--test-azure-disk-csi-driver=True', '--timeout=420m')' returned non-zero exit status 1
E1113 23:53:34.830] Command failed
I1113 23:53:34.830] process 679 exited with code 1 after 46.2m
E1113 23:53:34.830] FAIL: chewong-pull-in-tree-azure-disk-e2e-vmss
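The traceback above shows how the failure propagates: 'make e2e-test' exits 2, kubetest exits 1, and the scenario's check_env helper runs kubetest via subprocess.check_call, which raises CalledProcessError on any nonzero exit; bootstrap then records the job as FAIL. A minimal sketch of that pattern, assuming a simplified check_env (the real helper in scenarios/kubernetes_e2e.py does more, and the JOB_NAME variable here is illustrative); the failing child is a stand-in for the kubetest invocation from the log:

    import os
    import subprocess
    import sys

    def check_env(env, *cmd):
        # Run cmd with extra environment variables; a nonzero exit status
        # surfaces as subprocess.CalledProcessError, as in the traceback above.
        merged = dict(os.environ)
        merged.update(env)
        subprocess.check_call(cmd, env=merged)

    # Stand-in for kubetest, which exited 1 after 'make e2e-test' returned 2.
    try:
        check_env({"JOB_NAME": "chewong-pull-in-tree-azure-disk-e2e-vmss"},
                  sys.executable, "-c", "raise SystemExit(1)")
    except subprocess.CalledProcessError as err:
        # bootstrap logs this as "process ... exited with code 1" and marks FAIL
        print("Command failed: exit status %d" % err.returncode)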
I1113 23:53:34.831] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1113 23:53:35.357] Activated service account credentials for: [kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com]
I1113 23:53:35.393] process 25961 exited with code 0 after 0.0m
I1113 23:53:35.393] Call: gcloud config get-value account
I1113 23:53:35.639] process 25973 exited with code 0 after 0.0m
I1113 23:53:35.640] Will upload results to gs://kubernetes-upstream/pr-logs using kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com
I1113 23:53:35.640] Upload result and artifacts...
I1113 23:53:35.640] Gubernator results at https://gubernator.k8s.io/build/kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601
I1113 23:53:35.640] Call: gsutil ls gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601/artifacts
W1113 23:53:36.486] CommandException: One or more URLs matched no objects.
E1113 23:53:36.566] Command failed
I1113 23:53:36.566] process 25985 exited with code 1 after 0.0m
W1113 23:53:36.566] Remote dir gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601/artifacts does not exist yet
I1113 23:53:36.566] Call: gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601/artifacts
I1113 23:53:38.096] process 26129 exited with code 0 after 0.0m
I1113 23:53:38.096] Call: git rev-parse HEAD
I1113 23:53:38.099] process 26658 exited with code 0 after 0.0m
I1113 23:53:38.099] Call: git rev-parse HEAD
I1113 23:53:38.103] process 26659 exited with code 0 after 0.0m
I1113 23:53:38.104] Call: gsutil stat gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e-vmss/jobResultsCache.json
W1113 23:53:38.869] No URLs matched: gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e-vmss/jobResultsCache.json
I1113 23:53:38.869] process 26660 exited with code 1 after 0.0m
I1113 23:53:38.870] Call: gsutil -q -h Content-Type:application/json -h x-goog-if-generation-match:0 cp /tmp/gsutil_NMSyGw gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e-vmss/jobResultsCache.json
I1113 23:53:40.105] process 26804 exited with code 0 after 0.0m
I1113 23:53:40.106] Call: gsutil stat gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/jobResultsCache.json
W1113 23:53:40.954] No URLs matched: gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/jobResultsCache.json
I1113 23:53:40.954] process 26984 exited with code 1 after 0.0m
I1113 23:53:40.955] Call: gsutil -q -h Content-Type:application/json -h x-goog-if-generation-match:0 cp /tmp/gsutil_IEXF7D gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/jobResultsCache.json
I1113 23:53:42.186] process 27128 exited with code 0 after 0.0m
I1113 23:53:42.186] Call: gsutil -q -h Content-Type:application/json cp /tmp/gsutil_HSXIXX gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/1194753232421785601/finished.json
I1113 23:53:43.415] process 27308 exited with code 0 after 0.0m
I1113 23:53:43.415] Call: gsutil -q -h Content-Type:text/plain -h 'Cache-Control:private, max-age=0, no-transform' cp /tmp/gsutil_RP2Fhw gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e-vmss/latest-build.txt
I1113 23:53:44.493] process 27488 exited with code 0 after 0.0m
I1113 23:53:44.494] Call: gsutil -q -h Content-Type:text/plain -h 'Cache-Control:private, max-age=0, no-transform' cp /tmp/gsutil_TC2wCH gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e-vmss/latest-build.txt
I1113 23:53:45.665] process 27668 exited with code 0 after 0.0m
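A note on the jobResultsCache.json uploads above: the 'gsutil stat' probe followed by 'cp' with '-h x-goog-if-generation-match:0' is a create-only-if-absent pattern. Generation 0 matches only a nonexistent object, so the copy cannot clobber a cache file that a concurrent run uploaded first. A rough equivalent with the google-cloud-storage Python client, assuming that package is installed; the bucket and object names are taken from the log, while the local filename is illustrative (the log uses a temp file such as /tmp/gsutil_NMSyGw):

    from google.api_core.exceptions import PreconditionFailed
    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("kubernetes-upstream").blob(
        "pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e-vmss/jobResultsCache.json")
    try:
        # if_generation_match=0 is the client-library form of the
        # x-goog-if-generation-match:0 header: create only if absent.
        blob.upload_from_filename("jobResultsCache.json", if_generation_match=0)
    except PreconditionFailed:
        print("jobResultsCache.json already exists; not overwriting")

If the object already exists, the server rejects the write with 412 Precondition Failed, which is exactly the race-avoidance behavior the log's gsutil invocations rely on.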