W1113 21:45:49.285] **************************************************************************
bootstrap.py is deprecated!
test-infra oncall does not support any job still using bootstrap.py.
Please migrate your job to podutils!
https://github.com/kubernetes/test-infra/blob/master/prow/pod-utilities.md
**************************************************************************
I1113 21:45:49.285] Args: --job=chewong-pull-in-tree-azure-disk-e2e --service-account=/etc/service-account/service-account.json --upload=gs://kubernetes-jenkins/logs --job=chewong-pull-in-tree-azure-disk-e2e --root=/go/src --service-account=/etc/service-account/service-account.json --repo=k8s.io/kubernetes=release-1.16 --repo=sigs.k8s.io/azuredisk-csi-driver=master:a6a3e9b4c8eb15f983b9cfc34935ebd72f18b698,188:463a1299807122c7d9e078a5bb09bb13492c20cf --upload=gs://kubernetes-upstream/pr-logs --timeout=460 --scenario=kubernetes_e2e -- --gce-ssh= --test=true --up=true --down=true --deployment=acsengine --build=bazel --provider=skeleton --ginkgo-parallel=1 --acsengine-agentpoolcount=2 --acsengine-admin-username=azureuser '--acsengine-creds=$AZURE_CREDENTIALS' --acsengine-orchestratorRelease=1.16 --acsengine-mastervmsize=Standard_DS2_v2 --acsengine-agentvmsize=Standard_DS2_v2 --acsengine-hyperkube=True --acsengine-location=westus2 '--acsengine-public-key=$AZURE_SSH_PUBLIC_KEY_FILE' --acsengine-template-url=https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/00938364f6eef541bfee734c314ffb08dcd4ffbd/test/e2e/manifest/in-tree-vmss.json --acsengine-download-url=https://github.com/Azure/aks-engine/releases/download/v0.39.1/aks-engine-v0.39.1-linux-amd64.tar.gz --test-azure-disk-csi-driver=True --timeout=420m
I1113 21:45:49.285] Bootstrap chewong-pull-in-tree-azure-disk-e2e...
I1113 21:45:49.289] Builder: d945797b-065e-11ea-918b-16773c0c7aed
I1113 21:45:49.289] Image: gcr.io/k8s-testimages/kubekins-e2e:v20191112-9f04410-1.16
I1113 21:45:49.289] Gubernator results at https://gubernator.k8s.io/build/kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120
I1113 21:45:49.289] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1113 21:45:49.747] Activated service account credentials for: [kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com]
I1113 21:45:49.776] process 194 exited with code 0 after 0.0m
I1113 21:45:49.776] Call: gcloud config get-value account
I1113 21:45:49.990] process 206 exited with code 0 after 0.0m
I1113 21:45:49.990] Will upload results to gs://kubernetes-upstream/pr-logs using kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com
I1113 21:45:49.990] Root: /go/src
I1113 21:45:49.990] cd to /go/src
I1113 21:45:49.991] Checkout: /go/src/k8s.io/kubernetes release-1.16 to /go/src/k8s.io/kubernetes
I1113 21:45:49.991] Call: git init k8s.io/kubernetes
I1113 21:45:49.994] Initialized empty Git repository in /go/src/k8s.io/kubernetes/.git/
I1113 21:45:49.994] process 218 exited with code 0 after 0.0m
I1113 21:45:49.995] Call: git config --local user.name 'K8S Bootstrap'
I1113 21:45:49.997] process 219 exited with code 0 after 0.0m
I1113 21:45:49.997] Call: git config --local user.email k8s_bootstrap@localhost
I1113 21:45:49.999] process 220 exited with code 0 after 0.0m
I1113 21:45:49.999] Call: git fetch --quiet --tags https://github.com/kubernetes/kubernetes release-1.16
I1113 21:46:47.361] process 221 exited with code 0 after 1.0m
I1113 21:46:47.361] Call: git checkout -B test FETCH_HEAD
W1113 21:46:49.222] Switched to a new branch 'test'
I1113 21:46:49.231] process 233 exited with code 0 after 0.0m
I1113 21:46:49.231] Call: git show -s --format=format:%ct HEAD
I1113 21:46:49.234] process 234 exited with code 0 after 0.0m
I1113 21:46:49.234] Checkout: /go/src/sigs.k8s.io/azuredisk-csi-driver master:a6a3e9b4c8eb15f983b9cfc34935ebd72f18b698,188:463a1299807122c7d9e078a5bb09bb13492c20cf to /go/src/sigs.k8s.io/azuredisk-csi-driver
I1113 21:46:49.234] Call: git init sigs.k8s.io/azuredisk-csi-driver
I1113 21:46:49.238] Initialized empty Git repository in /go/src/sigs.k8s.io/azuredisk-csi-driver/.git/
I1113 21:46:49.238] process 235 exited with code 0 after 0.0m
I1113 21:46:49.238] Call: git config --local user.name 'K8S Bootstrap'
I1113 21:46:49.240] process 236 exited with code 0 after 0.0m
I1113 21:46:49.240] Call: git config --local user.email k8s_bootstrap@localhost
I1113 21:46:49.242] process 237 exited with code 0 after 0.0m
I1113 21:46:49.243] Call: git fetch --quiet --tags https://github.com/kubernetes-sigs/azuredisk-csi-driver master +refs/pull/188/head:refs/pr/188
I1113 21:46:55.685] process 238 exited with code 0 after 0.1m
I1113 21:46:55.686] Call: git checkout -B test a6a3e9b4c8eb15f983b9cfc34935ebd72f18b698
W1113 21:46:56.207] Switched to a new branch 'test'
I1113 21:46:56.210] process 249 exited with code 0 after 0.0m
I1113 21:46:56.210] Call: git show -s --format=format:%ct HEAD
I1113 21:46:56.213] process 250 exited with code 0 after 0.0m
I1113 21:46:56.213] Call: git merge --no-ff -m 'Merge +refs/pull/188/head:refs/pr/188' 463a1299807122c7d9e078a5bb09bb13492c20cf
I1113 21:46:56.583] Merge made by the 'recursive' strategy.
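The `--repo=sigs.k8s.io/azuredisk-csi-driver=master:a6a3e9b4c8eb15f983b9cfc34935ebd72f18b698,188:463a1299807122c7d9e078a5bb09bb13492c20cf` flag above encodes a base ref pinned to a SHA plus a list of PRs to merge on top, which is exactly the checkout the log performs (fetch, `checkout -B test <base SHA>`, `merge <PR SHA>`). A minimal sketch of parsing that spec, as a hypothetical helper (not part of bootstrap.py):

```python
def parse_checkout_spec(spec):
    """Split 'BASE_REF:BASE_SHA,PR:PR_SHA,...' into the base ref/SHA pair
    and a list of (pr_number, sha) tuples to merge, in order."""
    parts = spec.split(",")
    base_ref, base_sha = parts[0].split(":")
    pulls = []
    for part in parts[1:]:
        number, sha = part.split(":")
        pulls.append((int(number), sha))
    return (base_ref, base_sha), pulls
```

For the spec in this log it yields base `("master", "a6a3e9b4…")` and a single PR merge `(188, "463a1299…")`, matching the `git checkout -B test` and `git merge --no-ff` calls above.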
I1113 21:46:56.586] ...{safe_mounter_linux.go => safe_mounter_unix.go} | 2 +-
I1113 21:46:56.586] test/e2e/README.md | 9 +-
I1113 21:46:56.586] test/e2e/driver/azuredisk_csi_driver.go | 23 +++-
I1113 21:46:56.586] test/e2e/suite_test.go | 119 +++++++++++----------
I1113 21:46:56.586] 4 files changed, 89 insertions(+), 64 deletions(-)
I1113 21:46:56.586] rename pkg/mounter/{safe_mounter_linux.go => safe_mounter_unix.go} (97%)
I1113 21:46:56.587] process 251 exited with code 0 after 0.0m
I1113 21:46:56.587] Configure environment...
I1113 21:46:56.587] Call: git show -s --format=format:%ct HEAD
I1113 21:46:56.590] process 267 exited with code 0 after 0.0m
I1113 21:46:56.590] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1113 21:46:57.150] Activated service account credentials for: [kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com]
I1113 21:46:57.252] process 268 exited with code 0 after 0.0m
I1113 21:46:57.252] Call: gcloud config get-value account
I1113 21:46:57.464] process 280 exited with code 0 after 0.0m
I1113 21:46:57.465] Will upload results to gs://kubernetes-upstream/pr-logs using kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com
I1113 21:46:57.465] Call: bash -c ' set -o errexit set -o nounset export KUBE_ROOT=. source hack/lib/version.sh kube::version::get_version_vars echo $KUBE_GIT_VERSION '
I1113 21:46:57.770] process 292 exited with code 0 after 0.0m
I1113 21:46:57.771] Start 1194732973954437120 at v1.16.4-beta.0.1+d70a3ca08fe72a...
I1113 21:46:57.773] Call: gsutil -q -h Content-Type:application/json cp /tmp/gsutil_iPnm2g gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120/started.json
I1113 21:46:58.949] process 325 exited with code 0 after 0.0m
I1113 21:46:58.949] Call: gsutil -q -h Content-Type:text/plain -h 'x-goog-meta-link: gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120' cp /tmp/gsutil_r0AcSp gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120.txt
I1113 21:47:00.041] process 505 exited with code 0 after 0.0m
I1113 21:47:00.042] Call: /workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py --gce-ssh= --test=true --up=true --down=true --deployment=acsengine --build=bazel --provider=skeleton --ginkgo-parallel=1 --acsengine-agentpoolcount=2 --acsengine-admin-username=azureuser --acsengine-creds=/etc/azure-cred/credentials --acsengine-orchestratorRelease=1.16 --acsengine-mastervmsize=Standard_DS2_v2 --acsengine-agentvmsize=Standard_DS2_v2 --acsengine-hyperkube=True --acsengine-location=westus2 --acsengine-public-key=/etc/azure-ssh/azure-ssh-pub --acsengine-template-url=https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/00938364f6eef541bfee734c314ffb08dcd4ffbd/test/e2e/manifest/in-tree-vmss.json --acsengine-download-url=https://github.com/Azure/aks-engine/releases/download/v0.39.1/aks-engine-v0.39.1-linux-amd64.tar.gz --test-azure-disk-csi-driver=True --timeout=420m
W1113 21:47:00.075] starts with local mode
W1113 21:47:00.076] Environment:
W1113 21:47:00.076] ARTIFACTS=/workspace/_artifacts
W1113 21:47:00.076] AWS_SSH_PRIVATE_KEY_FILE=/root/.ssh/kube_aws_rsa
W1113 21:47:00.076] AWS_SSH_PUBLIC_KEY_FILE=/root/.ssh/kube_aws_rsa.pub
W1113 21:47:00.076] AZURE_CREDENTIALS=/etc/azure-cred/credentials
W1113 21:47:00.076] AZURE_SSH_PUBLIC_KEY_FILE=/etc/azure-ssh/azure-ssh-pub
W1113 21:47:00.076] AZURE_STORAGE_DRIVER=kubernetes.io/azure-disk
W1113 21:47:00.076] AZ_STORAGE_CONTAINER_NAME=mystoragecontainer
W1113 21:47:00.077] BAZEL_REMOTE_CACHE_ENABLED=false
W1113 21:47:00.077] BAZEL_VERSION=0.23.2
W1113 21:47:00.077] BOOTSTRAP_MIGRATION=yes
W1113 21:47:00.077] BUILD_ID=1194732973954437120
W1113 21:47:00.077] BUILD_NUMBER=1194732973954437120
W1113 21:47:00.077] CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true
W1113 21:47:00.077] CLOUDSDK_CONFIG=/go/src/k8s.io/kubernetes/.config/gcloud
W1113 21:47:00.077] CLOUDSDK_CORE_DISABLE_PROMPTS=1
W1113 21:47:00.077] CLOUDSDK_EXPERIMENTAL_FAST_COMPONENT_UPDATE=false
W1113 21:47:00.078] CLOUD_CONFIG=random
W1113 21:47:00.078] DECK_PORT=tcp://10.0.160.83:80
W1113 21:47:00.078] DECK_PORT_80_TCP=tcp://10.0.160.83:80
W1113 21:47:00.078] DECK_PORT_80_TCP_ADDR=10.0.160.83
W1113 21:47:00.078] DECK_PORT_80_TCP_PORT=80
W1113 21:47:00.078] DECK_PORT_80_TCP_PROTO=tcp
W1113 21:47:00.078] DECK_SERVICE_HOST=10.0.160.83
W1113 21:47:00.078] DECK_SERVICE_PORT=80
W1113 21:47:00.078] DOCKER_IN_DOCKER_ENABLED=true
W1113 21:47:00.078] DOCKER_IN_DOCKER_IPV6_ENABLED=false
W1113 21:47:00.079] GCS_ARTIFACTS_DIR=gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120/artifacts
W1113 21:47:00.079] GOOGLE_APPLICATION_CREDENTIALS=/etc/service-account/service-account.json
W1113 21:47:00.079] GOPATH=/go
W1113 21:47:00.079] GO_TARBALL=go1.12.12.linux-amd64.tar.gz
W1113 21:47:00.079] HOME=/workspace
W1113 21:47:00.079] HOOK_PORT=tcp://10.0.144.133:8888
W1113 21:47:00.079] HOOK_PORT_8888_TCP=tcp://10.0.144.133:8888
W1113 21:47:00.079] HOOK_PORT_8888_TCP_ADDR=10.0.144.133
W1113 21:47:00.079] HOOK_PORT_8888_TCP_PORT=8888
W1113 21:47:00.079] HOOK_PORT_8888_TCP_PROTO=tcp
W1113 21:47:00.080] HOOK_SERVICE_HOST=10.0.144.133
W1113 21:47:00.080] HOOK_SERVICE_PORT=8888
W1113 21:47:00.080] HOSTNAME=d945797b-065e-11ea-918b-16773c0c7aed
W1113 21:47:00.080] IMAGE=gcr.io/k8s-testimages/kubekins-e2e:v20191112-9f04410-1.16
W1113 21:47:00.080] INSTANCE_PREFIX=bootstrap-e2e
W1113 21:47:00.080] JENKINS_GCE_SSH_PRIVATE_KEY_FILE=/root/.ssh/google_compute_engine
W1113 21:47:00.080] JENKINS_GCE_SSH_PUBLIC_KEY_FILE=/root/.ssh/google_compute_engine.pub
W1113 21:47:00.080] JOB_NAME=chewong-pull-in-tree-azure-disk-e2e
W1113 21:47:00.081] JOB_SPEC={"type":"presubmit","job":"chewong-pull-in-tree-azure-disk-e2e","buildid":"1194732973954437120","prowjobid":"d945797b-065e-11ea-918b-16773c0c7aed","refs":{"org":"Azure","repo":"kubernetes","repo_link":"https://github.com/Azure/kubernetes","base_ref":"master","base_sha":"f501d8e59a599eaad45a50ee1da075cedc9d0ab9","base_link":"https://github.com/Azure/kubernetes/commit/f501d8e59a599eaad45a50ee1da075cedc9d0ab9","pulls":[{"number":113,"author":"chewong","sha":"879465972041635fd7bc8269da49153b67e63822","link":"https://github.com/Azure/kubernetes/pull/113","commit_link":"https://github.com/Azure/kubernetes/pull/113/commits/879465972041635fd7bc8269da49153b67e63822","author_link":"https://github.com/chewong"}]}}
W1113 21:47:00.081] JOB_TYPE=presubmit
W1113 21:47:00.081] KUBERNETES_PORT=tcp://10.0.0.1:443
W1113 21:47:00.081] KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
W1113 21:47:00.081] KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
W1113 21:47:00.081] KUBERNETES_PORT_443_TCP_PORT=443
W1113 21:47:00.081] KUBERNETES_PORT_443_TCP_PROTO=tcp
W1113 21:47:00.081] KUBERNETES_SERVICE_HOST=10.0.0.1
W1113 21:47:00.081] KUBERNETES_SERVICE_PORT=443
W1113 21:47:00.081] KUBERNETES_SERVICE_PORT_HTTPS=443
W1113 21:47:00.082] KUBETEST_IN_DOCKER=true
W1113 21:47:00.082] KUBETEST_MANUAL_DUMP=y
W1113 21:47:00.082] KUBE_AWS_INSTANCE_PREFIX=bootstrap-e2e
W1113 21:47:00.082] KUBE_GCE_INSTANCE_PREFIX=bootstrap-e2e
W1113 21:47:00.082] KUBE_SSH_KEY_PATH=/etc/ssh-key-secret/ssh-private
W1113 21:47:00.082] KUBE_SSH_PUBLIC_KEY_PATH=/etc/ssh-key-secret/ssh-public
W1113 21:47:00.082] KUBE_SSH_USER=azureuser
W1113 21:47:00.082] KUBE_VERBOSE=0
W1113 21:47:00.082] NODE_NAME=d945797b-065e-11ea-918b-16773c0c7aed
W1113 21:47:00.082] PATH=/go/bin:/go/bin:/usr/local/go/bin:/google-cloud-sdk/bin:/workspace:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
W1113 21:47:00.083] PROW_JOB_ID=d945797b-065e-11ea-918b-16773c0c7aed
W1113 21:47:00.083] PULL_BASE_REF=master
W1113 21:47:00.083] PULL_BASE_SHA=f501d8e59a599eaad45a50ee1da075cedc9d0ab9
W1113 21:47:00.083] PULL_NUMBER=113
W1113 21:47:00.083] PULL_PULL_SHA=879465972041635fd7bc8269da49153b67e63822
W1113 21:47:00.083] PULL_REFS=master:f501d8e59a599eaad45a50ee1da075cedc9d0ab9,113:879465972041635fd7bc8269da49153b67e63822
W1113 21:47:00.083] PWD=/workspace
W1113 21:47:00.083] REGISTRY=k8sprowinternal.azurecr.io
W1113 21:47:00.083] REPO_NAME=kubernetes
W1113 21:47:00.083] REPO_OWNER=Azure
W1113 21:47:00.084] SHLVL=2
W1113 21:47:00.084] SOURCE_DATE_EPOCH=1573647025
W1113 21:47:00.084] TERM=xterm
W1113 21:47:00.084] TIDE_PORT=tcp://10.0.180.53:80
W1113 21:47:00.084] TIDE_PORT_80_TCP=tcp://10.0.180.53:80
W1113 21:47:00.084] TIDE_PORT_80_TCP_ADDR=10.0.180.53
W1113 21:47:00.084] TIDE_PORT_80_TCP_PORT=80
W1113 21:47:00.084] TIDE_PORT_80_TCP_PROTO=tcp
W1113 21:47:00.084] TIDE_SERVICE_HOST=10.0.180.53
W1113 21:47:00.084] TIDE_SERVICE_PORT=80
W1113 21:47:00.084] WORKSPACE=/workspace
W1113 21:47:00.085] _=./test-infra/jenkins/bootstrap.py
W1113 21:47:00.085] Run: ('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--up', '--down', '--test', '--deployment=acsengine', '--provider=skeleton', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--ginkgo-parallel=1', '--acsengine-agentpoolcount=2', '--acsengine-admin-username=azureuser', '--acsengine-creds=/etc/azure-cred/credentials', '--acsengine-orchestratorRelease=1.16', '--acsengine-mastervmsize=Standard_DS2_v2', '--acsengine-agentvmsize=Standard_DS2_v2', '--acsengine-hyperkube=True', '--acsengine-location=westus2', '--acsengine-public-key=/etc/azure-ssh/azure-ssh-pub', '--acsengine-template-url=https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/00938364f6eef541bfee734c314ffb08dcd4ffbd/test/e2e/manifest/in-tree-vmss.json', '--acsengine-download-url=https://github.com/Azure/aks-engine/releases/download/v0.39.1/aks-engine-v0.39.1-linux-amd64.tar.gz', '--test-azure-disk-csi-driver=True', '--timeout=420m')
W1113 21:47:00.098] 2019/11/13 21:47:00 main.go:332: Limiting testing to 7h0m0s
W1113 21:47:00.101] 2019/11/13 21:47:00 azure.go:232: Reading credentials file /etc/azure-cred/credentials
W1113 21:47:00.101] 2019/11/13 21:47:00 process.go:153: Running: make -C /go/src/k8s.io/kubernetes bazel-release
W1113 21:47:00.127] Extracting Bazel installation...
I1113 21:47:00.227] make: Entering directory '/go/src/k8s.io/kubernetes'
W1113 21:47:08.533] Starting local Bazel server and connecting to it...
W1113 21:47:10.080] Loading:
W1113 21:47:10.084] Loading: 0 packages loaded
W1113 21:47:11.086] Loading: 0 packages loaded
W1113 21:47:12.686] Loading: 0 packages loaded
W1113 21:47:14.088] Loading: 0 packages loaded
W1113 21:47:16.007] Loading: 0 packages loaded
W1113 21:47:17.088] Loading: 0 packages loaded
W1113 21:47:17.088] currently loading: build/release-tars
W1113 21:47:19.186] Loading: 0 packages loaded
W1113 21:47:19.186] currently loading: build/release-tars
W1113 21:47:19.539] Analyzing: target //build/release-tars:release-tars (1 packages loaded, 0 targets configured)
W1113 21:47:20.985] Analyzing: target //build/release-tars:release-tars (187 packages loaded, 1037 targets configured)
W1113 21:47:22.643] Analyzing: target //build/release-tars:release-tars (1064 packages loaded, 3729 targets configured)
W1113 21:47:24.560] Analyzing: target //build/release-tars:release-tars (1863 packages loaded, 10725 targets configured)
W1113 21:47:26.752] Analyzing: target //build/release-tars:release-tars (2542 packages loaded, 22837 targets configured)
W1113 21:47:30.215] Analyzing: target //build/release-tars:release-tars (3032 packages loaded, 27379 targets configured)
W1113 21:47:33.328] Analyzing: target //build/release-tars:release-tars (3067 packages loaded, 34115 targets configured)
W1113 21:47:36.864] Analyzing: target //build/release-tars:release-tars (3077 packages loaded, 35290 targets configured)
W1113 21:47:43.668] INFO: Analysed target //build/release-tars:release-tars (3077 packages loaded, 36864 targets configured).
W1113 21:47:43.670] Building: checking cached actions
W1113 21:47:43.671] INFO: Found 1 target...
W1113 21:47:43.814] [0 / 11] [-----] BazelWorkspaceStatusAction stable-status.txt
W1113 21:47:49.269] [82 / 4,422] GUNZIP external/debian-iptables-amd64/image/000.tar.gz.nogz; 3s linux-sandbox ... (8 actions, 7 running)
W1113 21:47:55.913] [110 / 4,422] GoStdlib external/io_bazel_rules_go/linux_amd64_pure_stripped/stdlib%/pkg; 9s linux-sandbox ... (8 actions running)
W1113 21:48:04.106] [134 / 4,422] GoStdlib external/io_bazel_rules_go/linux_amd64_pure_stripped/stdlib%/pkg; 17s linux-sandbox ... (8 actions running)
W1113 21:48:12.230] [198 / 4,422] GoStdlib external/io_bazel_rules_go/linux_amd64_pure_stripped/stdlib%/pkg; 25s linux-sandbox ... (8 actions running)
W1113 21:48:21.623] [265 / 4,422] GoStdlib external/io_bazel_rules_go/linux_amd64_pure_stripped/stdlib%/pkg; 35s linux-sandbox ... (8 actions running)
W1113 21:48:32.529] [350 / 4,422] GoCompilePkg vendor/github.com/vmware/govmomi/vim25/types/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/vendor/github.com/vmware/govmomi/vim25/types.a; 10s linux-sandbox ... (8 actions, 5 running)
W1113 21:48:45.095] [440 / 4,422] GoCompilePkg vendor/golang.org/x/net/http2/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/vendor/golang.org/x/net/http2.a; 3s linux-sandbox ... (8 actions, 7 running)
W1113 21:48:59.665] [573 / 4,422] GoCompilePkg vendor/github.com/vmware/govmomi/vim25/types/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/vendor/github.com/vmware/govmomi/vim25/types.a; 5s linux-sandbox ... (8 actions running)
W1113 21:49:16.177] [641 / 4,422] GoCompilePkg vendor/github.com/gorilla/websocket/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/vendor/github.com/gorilla/websocket.a; 2s linux-sandbox ... (8 actions, 7 running)
W1113 21:49:36.499] [795 / 4,422] GoCompilePkg vendor/github.com/gophercloud/gophercloud/openstack/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/vendor/github.com/gophercloud/gophercloud/openstack.a; 2s linux-sandbox ... (7 actions, 6 running)
W1113 21:49:58.570] [996 / 4,422] GoCompilePkg vendor/github.com/go-openapi/spec/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/vendor/github.com/go-openapi/spec.a; 0s linux-sandbox ... (8 actions, 7 running)
W1113 21:50:24.405] [1,257 / 4,422] GoCompilePkg vendor/golang.org/x/tools/go/packages/linux_amd64_stripped/go_default_library%/k8s.io/kubernetes/vendor/golang.org/x/tools/go/packages.a [for host]; 1s linux-sandbox ... (8 actions, 7 running)
W1113 21:50:56.470] [1,403 / 4,422] GoPath pkg/generated/openapi/zz_generated.openapi~gopath/src/k8s.io/kubernetes/vendor/github.com/go-openapi/spec/bindata.go; 7s linux-sandbox ... (8 actions, 7 running)
W1113 21:51:31.625] [1,512 / 4,422] ; 31s linux-sandbox ... (8 actions, 7 running)
W1113 21:52:11.687] [1,673 / 4,422] GoCompilePkg staging/src/k8s.io/api/scheduling/v1alpha1/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/vendor/k8s.io/api/scheduling/v1alpha1.a; 2s linux-sandbox ... (8 actions, 7 running)
W1113 21:52:58.214] [2,023 / 4,422] GoCompilePkg vendor/github.com/vmware/govmomi/vim25/methods/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/vendor/github.com/vmware/govmomi/vim25/methods.a; 4s linux-sandbox ... (8 actions, 7 running)
W1113 21:53:51.318] [2,376 / 4,422] GoCompilePkg staging/src/k8s.io/client-go/kubernetes/typed/batch/v1beta1/fake/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/batch/v1beta1/fake.a; 1s linux-sandbox ... (8 actions, 7 running)
W1113 21:54:51.614] [2,765 / 4,422] GoCompilePkg staging/src/k8s.io/client-go/informers/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/vendor/k8s.io/client-go/informers.a; 2s linux-sandbox ... (8 actions, 7 running)
W1113 21:56:00.874] [3,073 / 4,422] ; 18s linux-sandbox ... (8 actions, 7 running)
W1113 21:57:21.492] [3,480 / 4,422] GoCompilePkg cmd/kube-controller-manager/app/linux_amd64_pure_stripped/go_default_library%/k8s.io/kubernetes/cmd/kube-controller-manager/app.a; 9s linux-sandbox ... (8 actions, 7 running)
W1113 21:58:53.240] [3,925 / 4,422] GZIP build/kube-controller-manager-internal-layer.tar.gz; 15s linux-sandbox ... (8 actions, 7 running)
W1113 22:00:39.000] [4,211 / 4,423] PackageTar build/release-tars/kubernetes-src.tar.gz; 31s linux-sandbox ... (8 actions, 7 running)
W1113 22:02:45.305] [4,409 / 4,423] GoLink cmd/genkubedocs/genkubedocs; 51s linux-sandbox ... (6 actions running)
W1113 22:05:06.076] [4,418 / 4,423] PackageTar build/release-tars/kubernetes-server-linux-amd64.tar.gz; 0s linux-sandbox ... (3 actions, 1 running)
W1113 22:05:06.076] [4,418 / 4,423] PackageTar build/release-tars/kubernetes-server-linux-amd64.tar.gz; 0s linux-sandbox ... (3 actions, 1 running)
W1113 22:06:21.158] Target //build/release-tars:release-tars up-to-date:
W1113 22:06:21.160] bazel-bin/build/release-tars/kubernetes-client-linux-amd64.tar.gz
W1113 22:06:21.160] bazel-bin/build/release-tars/kubernetes-node-linux-amd64.tar.gz
W1113 22:06:21.160] bazel-bin/build/release-tars/kubernetes-server-linux-amd64.tar.gz
W1113 22:06:21.160] bazel-bin/build/release-tars/kubernetes-manifests.tar.gz
W1113 22:06:21.160] bazel-bin/build/release-tars/kubernetes-test-portable.tar.gz
W1113 22:06:21.161] bazel-bin/build/release-tars/kubernetes-test-linux-amd64.tar.gz
W1113 22:06:21.161] bazel-bin/build/release-tars/kubernetes-test.tar.gz
W1113 22:06:21.161] bazel-bin/build/release-tars/kubernetes.tar.gz
W1113 22:06:21.162] bazel-bin/build/release-tars/kubernetes-src.tar.gz
W1113 22:06:21.187] INFO: Elapsed time: 1161.017s, Critical Path: 424.27s
W1113 22:06:21.187] INFO: 4356 processes: 4356 linux-sandbox.
W1113 22:06:21.191] INFO: Build completed successfully, 4423 total actions
W1113 22:06:21.193] INFO: Build completed successfully, 4423 total actions
W1113 22:06:21.245] 2019/11/13 22:06:21 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes bazel-release' finished in 19m21.144084493s
W1113 22:06:21.245] 2019/11/13 22:06:21 util.go:265: Flushing memory.
I1113 22:06:21.346] make: Leaving directory '/go/src/k8s.io/kubernetes'
W1113 22:06:51.572] 2019/11/13 22:06:51 azure.go:916: Deleting resource group: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59.
W1113 22:06:51.983] 2019/11/13 22:06:51 azure.go:654: CWD /go/src/k8s.io/kubernetes
W1113 22:06:51.984] 2019/11/13 22:06:51 azure.go:673: Attempting Docker login with azure cred.
W1113 22:06:53.546] 2019/11/13 22:06:53 azure.go:682: Docker login success.
W1113 22:06:53.546] 2019/11/13 22:06:53 azure.go:727: Building hyperkube.
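The stamps on these records (e.g. `W1113 22:06:21.245`) are severity + month/day + wall time, and kubetest's own duration reports ("finished in 19m21.144084493s") can be cross-checked against them. A minimal sketch of computing a delta between two such stamps, as a hypothetical helper (the year is not in the stamp and is assumed here):

```python
from datetime import datetime

def log_delta(start, end, year=2019):
    """Seconds between two bootstrap-style stamps like 'W1113 21:47:00.101'.

    The leading character is the severity (I/W/E); the next four digits
    are MMDD. The year is an assumption supplied by the caller."""
    def parse(stamp):
        body = stamp[1:]  # drop severity -> '1113 21:47:00.101'
        return datetime.strptime(f"{year} {body}", "%Y %m%d %H:%M:%S.%f")
    return (parse(end) - parse(start)).total_seconds()
```

Applied to the `bazel-release` step above, the delta from the `Running: make ...` record at `W1113 21:47:00.101` to the `finished in` record at `W1113 22:06:21.245` is 1161.144 s, consistent with the reported 19m21s (the sub-millisecond tail differs because the stamps are truncated to milliseconds).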
W1113 22:06:53.546] 2019/11/13 22:06:53 process.go:153: Running: make -C /go/src/k8s.io/kubernetes WHAT=cmd/hyperkube
W1113 22:07:24.584] found packages rand (cgo1.go) and rand2 (cgo2.go) in /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/bazel-kubernetes/external/go_sdk/doc/progs
W1113 22:09:57.391] 2019/11/13 22:09:57 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes WHAT=cmd/hyperkube' finished in 3m3.845354408s
W1113 22:09:57.392] 2019/11/13 22:09:57 azure.go:745: Pushing hyperkube.
W1113 22:09:57.392] 2019/11/13 22:09:57 process.go:153: Running: make -C /go/src/k8s.io/kubernetes/cluster/images/hyperkube push HYPERKUBE_BIN=/go/src/k8s.io/kubernetes/_output/bin/hyperkube
I1113 22:09:57.492] make: Entering directory '/go/src/k8s.io/kubernetes/cluster/images/hyperkube'
I1113 22:09:57.492] cp -r ./* /tmp/hyperkubeyhiM1o
I1113 22:09:57.493] cp /go/src/k8s.io/kubernetes/_output/bin/hyperkube /tmp/hyperkubeyhiM1o
I1113 22:09:57.568] chmod a+rx /tmp/hyperkubeyhiM1o/hyperkube
I1113 22:09:57.570] cd /tmp/hyperkubeyhiM1o && sed -i.back "s|BASEIMAGE|k8s.gcr.io/debian-hyperkube-base-amd64:0.12.1|g" Dockerfile
I1113 22:09:57.572] docker build --pull -t k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1 /tmp/hyperkubeyhiM1o
I1113 22:09:58.973] Sending build context to Docker daemon 212.1MB
I1113 22:09:58.995] Step 1/2 : FROM k8s.gcr.io/debian-hyperkube-base-amd64:0.12.1
I1113 22:09:59.322] 0.12.1: Pulling from debian-hyperkube-base-amd64
I1113 22:09:59.323] 346aee5ea5bc: Pulling fs layer
I1113 22:09:59.323] 9c47fde751af: Pulling fs layer
I1113 22:09:59.323] be2693a52daf: Pulling fs layer
I1113 22:09:59.323] 6b69eb11d043: Pulling fs layer
I1113 22:09:59.323] 0abeb1500767: Pulling fs layer
I1113 22:09:59.323] 4062d80041b7: Pulling fs layer
I1113 22:09:59.324] 23b6daf06fc2: Pulling fs layer
I1113 22:09:59.324] 0abeb1500767: Waiting
I1113 22:09:59.324] 6b69eb11d043: Waiting
I1113 22:09:59.324] 23b6daf06fc2: Waiting
I1113 22:09:59.324] 4062d80041b7: Waiting
I1113 22:09:59.524] 9c47fde751af: Verifying Checksum
I1113 22:09:59.524] 9c47fde751af: Download complete
I1113 22:09:59.557] be2693a52daf: Verifying Checksum
I1113 22:09:59.557] be2693a52daf: Download complete
I1113 22:09:59.661] 6b69eb11d043: Verifying Checksum
I1113 22:09:59.661] 6b69eb11d043: Download complete
I1113 22:09:59.750] 0abeb1500767: Download complete
I1113 22:09:59.816] 346aee5ea5bc: Verifying Checksum
I1113 22:09:59.816] 346aee5ea5bc: Download complete
I1113 22:10:00.299] 23b6daf06fc2: Verifying Checksum
I1113 22:10:00.299] 23b6daf06fc2: Download complete
I1113 22:10:00.752] 4062d80041b7: Verifying Checksum
I1113 22:10:00.753] 4062d80041b7: Download complete
I1113 22:10:01.285] 346aee5ea5bc: Pull complete
I1113 22:10:01.426] 9c47fde751af: Pull complete
I1113 22:10:01.613] be2693a52daf: Pull complete
I1113 22:10:01.754] 6b69eb11d043: Pull complete
I1113 22:10:01.905] 0abeb1500767: Pull complete
I1113 22:10:08.416] 4062d80041b7: Pull complete
I1113 22:10:09.722] 23b6daf06fc2: Pull complete
I1113 22:10:09.746] Digest: sha256:5d4ea2fb5fbe9a9a9da74f67cf2faefc881968bc39f2ac5d62d9167e575812a1
I1113 22:10:09.765] Status: Downloaded newer image for k8s.gcr.io/debian-hyperkube-base-amd64:0.12.1
I1113 22:10:09.765] ---> a46476511725
I1113 22:10:09.765] Step 2/2 : COPY hyperkube /hyperkube
I1113 22:10:33.555] ---> 8a646d3814b8
I1113 22:10:33.582] Successfully built 8a646d3814b8
I1113 22:10:33.661] Successfully tagged k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1
I1113 22:10:33.665] rm -rf "/tmp/hyperkubeyhiM1o"
I1113 22:10:33.692] docker push k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1
I1113 22:10:33.782] The push refers to repository [k8sprowinternal.azurecr.io/hyperkube-amd64]
I1113 22:10:33.808] f7b03b3feb8c: Preparing
I1113 22:10:33.808] 39b3ac6d96e9: Preparing
I1113 22:10:33.808] 7bbae4dddb88: Preparing
I1113 22:10:33.808] a00defcfe869: Preparing
I1113 22:10:33.808] 2ab0ae805c74: Preparing
I1113 22:10:33.809] 43a8fe7d2382: Preparing
I1113 22:10:33.809] 3f6a6f542637: Preparing
I1113 22:10:33.809] 5ba3be777c2d: Preparing
I1113 22:10:33.809] 3f6a6f542637: Waiting
I1113 22:10:33.809] 43a8fe7d2382: Waiting
I1113 22:10:33.809] 5ba3be777c2d: Waiting
I1113 22:10:33.994] a00defcfe869: Layer already exists
I1113 22:10:34.015] 2ab0ae805c74: Layer already exists
I1113 22:10:34.038] 39b3ac6d96e9: Layer already exists
I1113 22:10:34.059] 7bbae4dddb88: Layer already exists
I1113 22:10:34.080] 43a8fe7d2382: Layer already exists
I1113 22:10:34.103] 3f6a6f542637: Layer already exists
I1113 22:10:34.126] 5ba3be777c2d: Layer already exists
I1113 22:10:43.180] f7b03b3feb8c: Pushed
I1113 22:10:44.191] azure-e2e-1194732973954437120-e57aa4c1: digest: sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 size: 1998
I1113 22:10:44.211] docker rmi k8sprowinternal.azurecr.io/hyperkube:azure-e2e-1194732973954437120-e57aa4c1 2>/dev/null || true
I1113 22:10:44.259] docker tag k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1 k8sprowinternal.azurecr.io/hyperkube:azure-e2e-1194732973954437120-e57aa4c1
I1113 22:10:44.329] docker push k8sprowinternal.azurecr.io/hyperkube:azure-e2e-1194732973954437120-e57aa4c1
I1113 22:10:44.418] The push refers to repository [k8sprowinternal.azurecr.io/hyperkube]
I1113 22:10:44.431] f7b03b3feb8c: Preparing
I1113 22:10:44.431] 39b3ac6d96e9: Preparing
I1113 22:10:44.431] 7bbae4dddb88: Preparing
I1113 22:10:44.432] a00defcfe869: Preparing
I1113 22:10:44.432] 2ab0ae805c74: Preparing
I1113 22:10:44.432] 43a8fe7d2382: Preparing
I1113 22:10:44.432] 3f6a6f542637: Preparing
I1113 22:10:44.432] 5ba3be777c2d: Preparing
I1113 22:10:44.432] 3f6a6f542637: Waiting
I1113 22:10:44.432] 5ba3be777c2d: Waiting
I1113 22:10:44.540] a00defcfe869: Layer already exists
I1113 22:10:44.615] 7bbae4dddb88: Layer already exists
I1113 22:10:44.661] f7b03b3feb8c: Mounted from hyperkube-amd64
I1113 22:10:44.675] 43a8fe7d2382: Layer already exists
I1113 22:10:44.694] 2ab0ae805c74: Layer already exists
I1113 22:10:44.728] 39b3ac6d96e9: Layer already exists
I1113 22:10:44.759] 3f6a6f542637: Layer already exists
I1113 22:10:44.793] 5ba3be777c2d: Layer already exists
I1113 22:10:45.487] azure-e2e-1194732973954437120-e57aa4c1: digest: sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 size: 1998
I1113 22:10:45.503] make: Leaving directory '/go/src/k8s.io/kubernetes/cluster/images/hyperkube'
W1113 22:10:45.603] 2019/11/13 22:10:45 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes/cluster/images/hyperkube push HYPERKUBE_BIN=/go/src/k8s.io/kubernetes/_output/bin/hyperkube' finished in 48.111787672s
W1113 22:10:45.604] 2019/11/13 22:10:45 azure.go:686: Docker logout.
W1113 22:10:45.604] 2019/11/13 22:10:45 azure.go:754: Custom hyperkube URL: k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1 .
W1113 22:10:45.604] 2019/11/13 22:10:45 azure.go:797: downloading /workspace/aks980228950/kubernetes.json from https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/00938364f6eef541bfee734c314ffb08dcd4ffbd/test/e2e/manifest/in-tree-vmss.json.
W1113 22:10:45.604] 2019/11/13 22:10:45 util.go:42: curl https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/00938364f6eef541bfee734c314ffb08dcd4ffbd/test/e2e/manifest/in-tree-vmss.json
W1113 22:10:45.757] 2019/11/13 22:10:45 azure.go:529: downloading /workspace/aks-engine.tar.gz from https://github.com/Azure/aks-engine/releases/download/v0.39.1/aks-engine-v0.39.1-linux-amd64.tar.gz.
W1113 22:10:45.757] 2019/11/13 22:10:45 util.go:42: curl https://github.com/Azure/aks-engine/releases/download/v0.39.1/aks-engine-v0.39.1-linux-amd64.tar.gz
W1113 22:10:47.010] 2019/11/13 22:10:47 azure.go:556: Extracting tar file /workspace/aks-engine.tar.gz into directory /go/src/k8s.io/kubernetes .
W1113 22:10:47.011] 2019/11/13 22:10:47 process.go:153: Running: tar -xzf /workspace/aks-engine.tar.gz --strip 1
W1113 22:10:47.140] 2019/11/13 22:10:47 process.go:155: Step 'tar -xzf /workspace/aks-engine.tar.gz --strip 1' finished in 129.131994ms
W1113 22:10:47.140] 2019/11/13 22:10:47 process.go:153: Running: /go/src/k8s.io/kubernetes/aks-engine generate /workspace/aks980228950/kubernetes.json --output-directory /workspace/aks980228950
I1113 22:10:47.331] INFO[0000] Generating assets into /workspace/aks980228950...
W1113 22:10:58.801] 2019/11/13 22:10:58 process.go:155: Step '/go/src/k8s.io/kubernetes/aks-engine generate /workspace/aks980228950/kubernetes.json --output-directory /workspace/aks980228950' finished in 11.660956063s
W1113 22:10:58.804] 2019/11/13 22:10:58 azure.go:627: Setting kubeconfig env variable: kubeconfig path: /workspace/aks980228950/kubeconfig/kubeconfig.westus2.json.
W1113 22:10:58.805] 2019/11/13 22:10:58 azure.go:629: Creating resource group: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59.
W1113 22:10:58.805] 2019/11/13 22:10:58 azure.go:631: Creating Azure resource group: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59 for cluster deployment.
W1113 22:10:58.990] 2019/11/13 22:10:58 azure.go:636: Validating deployment ARM templates.
W1113 22:10:59.701] 2019/11/13 22:10:59 azure.go:642: Deploying cluster kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59 in resource group kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59.
W1113 22:18:00.580] 2019/11/13 22:18:00 process.go:153: Running: kubectl --match-server-version=false version
W1113 22:18:00.997] 2019/11/13 22:18:00 process.go:155: Step 'kubectl --match-server-version=false version' finished in 416.642513ms
W1113 22:18:00.997] 2019/11/13 22:18:00 process.go:153: Running: kubectl --match-server-version=false get nodes -oyaml
W1113 22:18:01.743] 2019/11/13 22:18:01 process.go:155: Step 'kubectl --match-server-version=false get nodes -oyaml' finished in 746.060055ms
W1113 22:18:01.743] 2019/11/13 22:18:01 azure.go:980: KUBE_TEST_REPO_LIST_DOWNLOAD_LOCATION not set. Using default test image repos.
W1113 22:18:01.743] 2019/11/13 22:18:01 azure.go:980: KUBE_TEST_REPO_LIST_DOWNLOAD_LOCATION not set. Using default test image repos.
W1113 22:18:01.744] 2019/11/13 22:18:01 process.go:153: Running: kubectl get nodes --no-headers
W1113 22:18:01.849] 2019/11/13 22:18:01 process.go:155: Step 'kubectl get nodes --no-headers' finished in 105.710915ms
W1113 22:18:01.849] 2019/11/13 22:18:01 e2e.go:474: Cluster nodes:
W1113 22:18:01.849] k8s-agentpool1-12937544-vmss000000 Ready 15s v1.16.4-beta.0.1+d70a3ca08fe72a
W1113 22:18:01.850] k8s-agentpool1-12937544-vmss000001 Ready 16s v1.16.4-beta.0.1+d70a3ca08fe72a
W1113 22:18:01.850] k8s-master-12937544-0 Ready 15s v1.16.4-beta.0.1+d70a3ca08fe72a
W1113 22:18:01.850] 2019/11/13 22:18:01 process.go:153: Running: kubectl --match-server-version=false version
W1113 22:18:01.959] 2019/11/13 22:18:01 process.go:155: Step 'kubectl --match-server-version=false version' finished in 109.982948ms
W1113 22:18:01.959] 2019/11/13 22:18:01 process.go:153: Running: make e2e-test
I1113 22:18:02.060] go test -v -timeout=0 ./test/e2e "-ginkgo.noColor"
I1113 22:18:49.136] === RUN TestE2E
I1113 22:18:49.137] Running Suite: AzureDisk CSI Driver End-to-End Tests
I1113 22:18:49.137] ====================================================
I1113 22:18:49.137] Random Seed: 1573683529
I1113 22:18:49.137] Will run 9 of 9 specs
I1113 22:18:49.137]
I1113 22:18:49.739] 2019/11/13 22:18:49 Attempting docker login with Azure service principal
I1113 22:18:50.597] 2019/11/13 22:18:50 docker login is successful
I1113 22:18:50.597] 2019/11/13 22:18:50 Installing Azure Disk CSI Driver...
I1113 22:18:50.598] make[1]: Entering directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 22:18:50.600] # Use v2.11.0 helm to match tiller's version in clusters made by aks-engine
I1113 22:18:50.601] curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | DESIRED_VERSION=v2.11.0 bash
I1113 22:18:51.047] % Total % Received % Xferd Average Speed Time Time Time Current
I1113 22:18:51.048] Dload Upload Total Spent Left Speed
I1113 22:18:51.234] 100 7164 100 7164 0 0 38516 0 --:--:-- --:--:-- --:--:-- 38516
I1113 22:18:51.247] Downloading https://get.helm.sh/helm-v2.11.0-linux-amd64.tar.gz
I1113 22:18:52.532] Preparing to install helm and tiller into /usr/local/bin
I1113 22:18:52.558] helm installed into /usr/local/bin/helm
I1113 22:18:52.583] tiller installed into /usr/local/bin/tiller
I1113 22:18:52.584] Run 'helm init' to configure helm.
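Just below, the driver Makefile avoids redundant builds with `docker pull $IMAGE || make azuredisk-container push`: the right-hand side runs only when the tag is absent from the registry (the `manifest unknown` error seen in the log). A self-contained sketch of that control flow, with `docker pull` and `make … push` replaced by stand-in functions and a `/tmp` directory standing in for the registry:

```shell
# Sketch of the "only build and push if the tag is missing" idiom.
# pull_image/build_and_push and /tmp/fake-registry are illustrative stubs,
# not the real docker/make commands from the job.
TAG=e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35
REGISTRY_DIR=/tmp/fake-registry
mkdir -p "$REGISTRY_DIR"

pull_image() {                  # stands in for `docker pull`; fails when the tag is absent
  [ -f "$REGISTRY_DIR/$TAG" ]
}
build_and_push() {              # stands in for `make azuredisk-container push`
  echo "building image for $TAG"
  : > "$REGISTRY_DIR/$TAG"
}

pull_image || build_and_push    # first run: manifest unknown, so build and push
pull_image || build_and_push    # later runs with the same commit: pull succeeds, no rebuild
```

Because the tag embeds the git commit, re-runs of the job on an unchanged commit skip the build entirely.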
I1113 22:18:52.598] # Make sure tiller is ready
I1113 22:18:52.599] kubectl wait pod -l name=tiller --namespace kube-system --for condition=ready --timeout 5m
I1113 22:18:52.773] pod/tiller-deploy-7559b6b885-qhvlc condition met
I1113 22:18:52.775] helm version
I1113 22:18:52.820] Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
I1113 22:18:52.964] Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
I1113 22:18:52.965] # Only build and push the image if it does not exist in the registry
I1113 22:18:52.966] docker pull k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35 || make azuredisk-container push
I1113 22:18:53.333] Error response from daemon: manifest for k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35 not found: manifest unknown: manifest unknown
I1113 22:18:53.336] make[2]: Entering directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 22:18:53.340] if [ ! -d ./vendor ]; then dep ensure -vendor-only; fi
I1113 22:18:53.341] CGO_ENABLED=0 GOOS=linux go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=6f0d4d24a3db407e360e5c20ecf555590540ca35 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2019-11-13T22:18:53Z -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.DriverName=disk.csi.azure.com -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.topologyKey=topology.disk.csi.azure.com/zone -extldflags "-static"" -o _output/azurediskplugin ./pkg/azurediskplugin
I1113 22:19:28.952] docker build --no-cache -t k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35 -f ./pkg/azurediskplugin/Dockerfile .
I1113 22:19:30.669] Sending build context to Docker daemon 214.4MB
I1113 22:19:30.694] Step 1/6 : FROM aksrepos.azurecr.io/fundamental/base-ubuntu:v0.0.5
I1113 22:19:30.843] v0.0.5: Pulling from fundamental/base-ubuntu
I1113 22:19:30.844] 5667fdb72017: Pulling fs layer
I1113 22:19:30.844] d83811f270d5: Pulling fs layer
I1113 22:19:30.844] ee671aafb583: Pulling fs layer
I1113 22:19:30.844] 7fc152dfb3a6: Pulling fs layer
I1113 22:19:30.844] 7940edb1bae7: Pulling fs layer
I1113 22:19:30.844] 2ac04aec52fc: Pulling fs layer
I1113 22:19:30.844] 3d25cb8b687d: Pulling fs layer
I1113 22:19:30.845] 3d25cb8b687d: Waiting
I1113 22:19:30.845] 7940edb1bae7: Waiting
I1113 22:19:30.845] 2ac04aec52fc: Waiting
I1113 22:19:30.845] 7fc152dfb3a6: Waiting
I1113 22:19:30.989] ee671aafb583: Verifying Checksum
I1113 22:19:30.990] ee671aafb583: Download complete
I1113 22:19:31.011] d83811f270d5: Verifying Checksum
I1113 22:19:31.062] 7940edb1bae7: Verifying Checksum
I1113 22:19:31.062] 7940edb1bae7: Download complete
I1113 22:19:31.121] 7fc152dfb3a6: Verifying Checksum
I1113 22:19:31.121] 7fc152dfb3a6: Download complete
I1113 22:19:31.384] 5667fdb72017: Verifying Checksum
I1113 22:19:31.384] 5667fdb72017: Download complete
I1113 22:19:31.507] 3d25cb8b687d: Verifying Checksum
I1113 22:19:31.507] 3d25cb8b687d: Download complete
I1113 22:19:31.692] 2ac04aec52fc: Verifying Checksum
I1113 22:19:31.692] 2ac04aec52fc: Download complete
I1113 22:19:33.791] 5667fdb72017: Pull complete
I1113 22:19:33.959] d83811f270d5: Pull complete
I1113 22:19:34.116] ee671aafb583: Pull complete
I1113 22:19:34.271] 7fc152dfb3a6: Pull complete
I1113 22:19:34.433] 7940edb1bae7: Pull complete
I1113 22:19:36.932] 2ac04aec52fc: Pull complete
I1113 22:19:37.840] 3d25cb8b687d: Pull complete
I1113 22:19:37.865] Digest: sha256:786f291c07d8b7a063d1cabd1a143a217767b34deeca1188565c46f496e50f03
I1113 22:19:37.885] Status: Downloaded newer image for aksrepos.azurecr.io/fundamental/base-ubuntu:v0.0.5
I1113 22:19:37.885] ---> 7730e8d0ed58
I1113 22:19:37.885] Step 2/6 : RUN apt-get update && apt-get install -y util-linux e2fsprogs mount ca-certificates udev xfsprogs
I1113 22:19:44.066] ---> Running in 72b6740245f2
I1113 22:19:45.079] Hit:1 http://azure.archive.ubuntu.com/ubuntu bionic InRelease
I1113 22:19:45.080] Get:2 http://azure.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
I1113 22:19:45.081] Get:3 http://azure.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
I1113 22:19:45.242] Get:4 http://azure.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1312 kB]
I1113 22:19:45.270] Get:5 http://azure.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1030 kB]
I1113 22:19:45.285] Get:6 http://azure.archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [29.9 kB]
I1113 22:19:45.285] Get:7 http://azure.archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [9571 B]
I1113 22:19:45.332] Get:8 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
I1113 22:19:45.342] Get:9 http://azure.archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [4234 B]
I1113 22:19:46.108] Get:10 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [16.8 kB]
I1113 22:19:46.538] Get:11 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [785 kB]
I1113 22:19:47.707] Get:12 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [734 kB]
I1113 22:19:48.861] Get:13 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [6504 B]
I1113 22:19:48.873] Fetched 4180 kB in 4s (1092 kB/s)
I1113 22:19:49.700] Reading package lists...
I1113 22:19:50.581] Reading package lists...
I1113 22:19:50.745] Building dependency tree...
I1113 22:19:50.746] Reading state information...
I1113 22:19:50.864] ca-certificates is already the newest version (20180409).
I1113 22:19:50.864] e2fsprogs is already the newest version (1.44.1-1ubuntu1.2).
I1113 22:19:50.864] mount is already the newest version (2.31.1-0.4ubuntu3.4).
I1113 22:19:50.864] util-linux is already the newest version (2.31.1-0.4ubuntu3.4).
I1113 22:19:50.864] The following additional packages will be installed:
I1113 22:19:50.865] libreadline5
I1113 22:19:50.865] Suggested packages:
I1113 22:19:50.865] xfsdump acl attr quota
I1113 22:19:50.885] The following NEW packages will be installed:
I1113 22:19:50.885] libreadline5 udev xfsprogs
I1113 22:19:51.064] 0 upgraded, 3 newly installed, 0 to remove and 10 not upgraded.
I1113 22:19:51.065] Need to get 1884 kB of archives.
I1113 22:19:51.065] After this operation, 12.2 MB of additional disk space will be used.
I1113 22:19:51.065] Get:1 http://azure.archive.ubuntu.com/ubuntu bionic-updates/main amd64 udev amd64 237-3ubuntu10.31 [1102 kB]
I1113 22:19:51.244] Get:2 http://azure.archive.ubuntu.com/ubuntu bionic/main amd64 libreadline5 amd64 5.2+dfsg-3build1 [99.5 kB]
I1113 22:19:51.250] Get:3 http://azure.archive.ubuntu.com/ubuntu bionic/main amd64 xfsprogs amd64 4.9.0+nmu1ubuntu2 [683 kB]
I1113 22:19:51.462] debconf: delaying package configuration, since apt-utils is not installed
I1113 22:19:51.494] Fetched 1884 kB in 0s (5130 kB/s)
I1113 22:19:51.587] Selecting previously unselected package udev.
I1113 22:19:51.592] (Reading database ... 6917 files and directories currently installed.)
I1113 22:19:51.593] Preparing to unpack .../udev_237-3ubuntu10.31_amd64.deb ...
I1113 22:19:51.637] Unpacking udev (237-3ubuntu10.31) ...
I1113 22:19:51.972] Selecting previously unselected package libreadline5:amd64.
I1113 22:19:51.973] Preparing to unpack .../libreadline5_5.2+dfsg-3build1_amd64.deb ...
I1113 22:19:52.002] Unpacking libreadline5:amd64 (5.2+dfsg-3build1) ...
I1113 22:19:52.137] Selecting previously unselected package xfsprogs.
I1113 22:19:52.138] Preparing to unpack .../xfsprogs_4.9.0+nmu1ubuntu2_amd64.deb ...
I1113 22:19:52.152] Unpacking xfsprogs (4.9.0+nmu1ubuntu2) ...
I1113 22:19:52.373] Setting up udev (237-3ubuntu10.31) ...
I1113 22:19:52.783] invoke-rc.d: could not determine current runlevel
I1113 22:19:52.786] invoke-rc.d: policy-rc.d denied execution of start.
I1113 22:19:52.838] Setting up libreadline5:amd64 (5.2+dfsg-3build1) ...
I1113 22:19:52.885] Setting up xfsprogs (4.9.0+nmu1ubuntu2) ...
I1113 22:19:52.931] Processing triggers for systemd (237-3ubuntu10.31) ...
I1113 22:19:52.961] Processing triggers for libc-bin (2.27-3ubuntu1) ...
I1113 22:19:54.576] Removing intermediate container 72b6740245f2
I1113 22:19:54.576] ---> 5ffc542f6839
I1113 22:19:54.576] Step 3/6 : LABEL maintainers="andyzhangx"
I1113 22:19:54.854] ---> Running in 3a8c1d3c5bf8
I1113 22:19:55.288] Removing intermediate container 3a8c1d3c5bf8
I1113 22:19:55.288] ---> f7fb7db90a8f
I1113 22:19:55.288] Step 4/6 : LABEL description="Azure Disk CSI Driver"
I1113 22:19:55.465] ---> Running in 74344613adf8
I1113 22:19:56.089] Removing intermediate container 74344613adf8
I1113 22:19:56.089] ---> f63a8ad00d01
I1113 22:19:56.089] Step 5/6 : COPY ./_output/azurediskplugin /azurediskplugin
I1113 22:19:58.490] ---> af2964407d30
I1113 22:19:58.490] Step 6/6 : ENTRYPOINT ["/azurediskplugin"]
I1113 22:19:58.638] ---> Running in 2aa3e4662dec
I1113 22:19:59.092] Removing intermediate container 2aa3e4662dec
I1113 22:19:59.092] ---> 4de3f8266c75
I1113 22:19:59.208] Successfully built 4de3f8266c75
I1113 22:19:59.415] Successfully tagged k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35
I1113 22:19:59.418] docker push k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35
I1113 22:19:59.491] The push refers to repository [k8sprowinternal.azurecr.io/azuredisk-csi]
I1113 22:19:59.504] 5bc3f33943e7: Preparing
I1113 22:19:59.504] fd929dbaaf5d: Preparing
I1113 22:19:59.505] 77098f020d77: Preparing
I1113 22:19:59.505] f780fac61456: Preparing
I1113 22:19:59.505] f89c15b53b76: Preparing
I1113 22:19:59.505] e80c789bc6ac: Preparing
I1113 22:19:59.505] 6c3332381368: Preparing
I1113 22:19:59.505] ef1a1ec5bba9: Preparing
I1113 22:19:59.505] a1aa3da2a80a: Preparing
I1113 22:19:59.505] 6c3332381368: Waiting
I1113 22:19:59.505] ef1a1ec5bba9: Waiting
I1113 22:19:59.506] a1aa3da2a80a: Waiting
I1113 22:19:59.676] f89c15b53b76: Layer already exists
I1113 22:19:59.744] f780fac61456: Layer already exists
I1113 22:19:59.765] 77098f020d77: Layer already exists
I1113 22:19:59.787] e80c789bc6ac: Layer already exists
I1113 22:19:59.841] 6c3332381368: Layer already exists
I1113 22:19:59.865] ef1a1ec5bba9: Layer already exists
I1113 22:19:59.886] a1aa3da2a80a: Layer already exists
I1113 22:20:01.471] fd929dbaaf5d: Pushed
I1113 22:20:02.910] 5bc3f33943e7: Pushed
I1113 22:20:03.642] e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35: digest: sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632 size: 2207
I1113 22:20:03.655] make[2]: Leaving directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 22:20:03.655] helm install charts/latest/azuredisk-csi-driver -n azuredisk-csi-driver --namespace kube-system --wait \
I1113 22:20:03.655] --set image.pullPolicy=IfNotPresent \
I1113 22:20:03.655] --set image.repository=k8sprowinternal.azurecr.io/azuredisk-csi \
I1113 22:20:03.655] --set image.tag=e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35
I1113 22:21:04.220] NAME: azuredisk-csi-driver
I1113 22:21:05.030] LAST DEPLOYED: Wed Nov 13 22:20:04 2019
I1113 22:21:05.031] NAMESPACE: kube-system
I1113 22:21:05.031] STATUS: DEPLOYED
I1113 22:21:05.031]
I1113 22:21:05.031] RESOURCES:
I1113 22:21:05.032] ==> v1/ServiceAccount
I1113 22:21:05.032] NAME AGE
I1113 22:21:05.032] csi-azuredisk-controller-sa 58s
I1113 22:21:05.032]
I1113 22:21:05.032] ==> v1beta1/CustomResourceDefinition
I1113 22:21:05.033] csidrivers.csi.storage.k8s.io 58s
I1113 22:21:05.033] csinodeinfos.csi.storage.k8s.io 58s
I1113 22:21:05.033]
I1113 22:21:05.033] ==> v1/ClusterRole
I1113 22:21:05.033] azuredisk-external-attacher-role 58s
I1113 22:21:05.034] azuredisk-cluster-driver-registrar-role 58s
I1113 22:21:05.034] azuredisk-external-provisioner-role 58s
I1113 22:21:05.034] azuredisk-external-snapshotter-role 58s
I1113 22:21:05.034]
I1113 22:21:05.034] ==> v1/ClusterRoleBinding
I1113 22:21:05.034] azuredisk-csi-provisioner-binding 58s
I1113 22:21:05.035] azuredisk-csi-attacher-binding 58s
I1113 22:21:05.035] azuredisk-csi-snapshotter-binding 58s
I1113 22:21:05.035] azuredisk-csi-driver-registrar-binding 58s
I1113 22:21:05.035]
I1113 22:21:05.035] ==> v1/Service
I1113 22:21:05.036] csi-azuredisk-controller 58s
I1113 22:21:05.036]
I1113 22:21:05.036] ==> v1/DaemonSet
I1113 22:21:05.036] csi-azuredisk-node 58s
I1113 22:21:05.036]
I1113 22:21:05.036] ==> v1/Deployment
I1113 22:21:05.037] csi-azuredisk-controller 58s
I1113 22:21:05.037]
I1113 22:21:05.037] ==> v1/Pod(related)
I1113 22:21:05.037]
I1113 22:21:05.037] NAME READY STATUS RESTARTS AGE
I1113 22:21:05.038] csi-azuredisk-node-5c6vt 3/3 Running 0 58s
I1113 22:21:05.038] csi-azuredisk-node-vb4gm 3/3 Running 0 58s
I1113 22:21:05.038] csi-azuredisk-controller-98cdd8546-5rnp9 6/6 Running 0 58s
I1113 22:21:05.038]
I1113 22:21:05.038]
I1113 22:21:05.039] NOTES:
I1113 22:21:05.039] The Azure Disk CSI Driver is getting deployed to your cluster.
I1113 22:21:05.039]
I1113 22:21:05.039] To check Azure Disk CSI Driver pods status, please run:
I1113 22:21:05.039]
I1113 22:21:05.039] kubectl --namespace=kube-system get pods --selector="release=azuredisk-csi-driver" --watch
I1113 22:21:05.040]
I1113 22:21:05.040] make[1]: Leaving directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 22:21:05.040] 2019/11/13 22:21:05 Azure Disk CSI Driver installed
I1113 22:23:04.682] • [SLOW TEST:119.647 seconds]
I1113 22:23:04.683] Dynamic Provisioning
I1113 22:23:04.683] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 22:23:04.683] should create a volume on demand
I1113 22:23:04.683] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:46
I1113 22:23:04.683] ------------------------------
I1113 22:25:20.205] • [SLOW TEST:135.523 seconds]
I1113 22:25:20.206] Dynamic Provisioning
I1113 22:25:20.206] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 22:25:20.206] should create a raw block volume on demand
I1113 22:25:20.206] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:68
I1113 22:25:20.206] ------------------------------
I1113 22:27:32.721] • [SLOW TEST:132.515 seconds]
I1113 22:27:32.722] Dynamic Provisioning
I1113 22:27:32.722] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 22:27:32.722] should create a volume on demand and mount it as readOnly in a pod
I1113 22:27:32.723] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:92
I1113 22:27:32.723] ------------------------------
I1113 22:31:48.224] • [SLOW TEST:255.503 seconds]
I1113 22:31:48.225] Dynamic Provisioning
I1113 22:31:48.225] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 22:31:48.225] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node
I1113 22:31:48.225] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:116
I1113 22:31:48.226] ------------------------------
I1113 22:35:16.036] • [SLOW TEST:207.812 seconds]
I1113 22:35:16.036] Dynamic Provisioning
I1113 22:35:16.036] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 22:35:16.036] should create a deployment object, write and read to it, delete the pod and write and read to it again
I1113 22:35:16.037] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:153
I1113 22:35:16.037] ------------------------------
I1113 22:35:44.392] • [SLOW TEST:28.356 seconds]
I1113 22:35:44.392] Dynamic Provisioning
I1113 22:35:44.393] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 22:35:44.393] should delete PV with reclaimPolicy "Delete"
I1113 22:35:44.393] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:178
I1113 22:35:44.393] ------------------------------
I1113 22:36:12.928] STEP: Creating a kubernetes client
I1113 22:36:12.929] Nov 13 22:35:44.392: INFO: >>> kubeConfig: /workspace/aks980228950/kubeconfig/kubeconfig.westus2.json
I1113 22:36:12.929] STEP: Building a namespace api object, basename azuredisk
I1113 22:36:12.929] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-2839
I1113 22:36:12.929] STEP: Waiting for a default service account to be provisioned in namespace
I1113 22:36:12.929] STEP: setting up the StorageClass
I1113 22:36:12.930] STEP: creating a StorageClass
I1113 22:36:12.930] STEP: setting up the PVC and PV
I1113 22:36:12.930] STEP: creating a PVC
I1113 22:36:12.930] STEP: waiting for PVC to be in phase "Bound"
I1113 22:36:12.930] Nov 13 22:35:44.554: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-nb588] to have phase Bound
I1113 22:36:12.930] Nov 13 22:35:44.562: INFO: PersistentVolumeClaim pvc-nb588 found but phase is Pending instead of Bound.
I1113 22:36:12.931] Nov 13 22:35:46.565: INFO: PersistentVolumeClaim pvc-nb588 found but phase is Pending instead of Bound.
I1113 22:36:12.931] Nov 13 22:35:48.569: INFO: PersistentVolumeClaim pvc-nb588 found but phase is Pending instead of Bound.
I1113 22:36:12.931] Nov 13 22:35:50.572: INFO: PersistentVolumeClaim pvc-nb588 found but phase is Pending instead of Bound.
I1113 22:36:12.931] Nov 13 22:35:52.576: INFO: PersistentVolumeClaim pvc-nb588 found but phase is Pending instead of Bound.
I1113 22:36:12.931] Nov 13 22:35:54.580: INFO: PersistentVolumeClaim pvc-nb588 found but phase is Pending instead of Bound.
I1113 22:36:12.932] Nov 13 22:35:56.583: INFO: PersistentVolumeClaim pvc-nb588 found and phase=Bound (12.029315191s)
I1113 22:36:12.932] STEP: checking the PVC
I1113 22:36:12.932] STEP: validating provisioned PV
I1113 22:36:12.932] STEP: checking the PV
I1113 22:36:12.932] Nov 13 22:35:56.599: INFO: deleting PVC "azuredisk-2839"/"pvc-nb588"
I1113 22:36:12.932] Nov 13 22:35:56.599: INFO: Deleting PersistentVolumeClaim "pvc-nb588"
I1113 22:36:12.932] Nov 13 22:35:56.603: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2839 to be removed
I1113 22:36:12.933] Nov 13 22:35:56.606: INFO: Claim "azuredisk-2839" in namespace "pvc-nb588" doesn't exist in the system
I1113 22:36:12.933] Nov 13 22:35:56.606: INFO: Waiting up to 10m0s for PersistentVolume pvc-b0242f2e-c665-4943-b1c0-e650b55bca62 to have phase Released
I1113 22:36:12.934] Nov 13 22:35:56.608: INFO: PersistentVolume pvc-b0242f2e-c665-4943-b1c0-e650b55bca62 found but phase is Bound instead of Released.
I1113 22:36:12.934] Nov 13 22:36:01.611: INFO: PersistentVolume pvc-b0242f2e-c665-4943-b1c0-e650b55bca62 found and phase=Released (5.005404829s)
I1113 22:36:12.934] STEP: deleting PV "pvc-b0242f2e-c665-4943-b1c0-e650b55bca62"
I1113 22:36:12.934] Nov 13 22:36:01.611: INFO: Deleting PersistentVolume "pvc-b0242f2e-c665-4943-b1c0-e650b55bca62"
I1113 22:36:12.934] STEP: waiting for claim's PV "pvc-b0242f2e-c665-4943-b1c0-e650b55bca62" to be deleted
I1113 22:36:12.935] Nov 13 22:36:01.616: INFO: Waiting up to 10m0s for PersistentVolume pvc-b0242f2e-c665-4943-b1c0-e650b55bca62 to get deleted
I1113 22:36:12.935] Nov 13 22:36:01.618: INFO: PersistentVolume pvc-b0242f2e-c665-4943-b1c0-e650b55bca62 found and phase=Released (2.309718ms)
I1113 22:36:12.935] Nov 13 22:36:06.622: INFO: PersistentVolume pvc-b0242f2e-c665-4943-b1c0-e650b55bca62 was removed
I1113 22:36:12.935] STEP: Collecting events from namespace "azuredisk-2839".
I1113 22:36:12.935] STEP: Found 1 events.
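The waits above follow a single pattern: poll the object's phase on a fixed interval until it matches or a deadline expires (here, 2s intervals inside a 5m0s budget for the PVC, 10m0s for the PV). The real implementation is Go code in the Kubernetes e2e test framework; the shape of the loop can be sketched in shell, with `get_phase` a stub standing in for a `kubectl` query (it reports Pending a few times, then Bound):

```shell
# Poll-until-phase sketch. get_phase and /tmp/pvc-poll-count are illustrative
# stubs, not part of the actual test framework.
rm -f /tmp/pvc-poll-count
get_phase() {
  n=$(cat /tmp/pvc-poll-count 2>/dev/null || echo 0)
  echo $((n + 1)) > /tmp/pvc-poll-count
  if [ "$n" -ge 3 ]; then echo Bound; else echo Pending; fi
}

deadline=$(( $(date +%s) + 300 ))       # 5m0s budget, matching the log
while [ "$(get_phase)" != "Bound" ]; do
  [ "$(date +%s)" -lt "$deadline" ] || { echo "timed out waiting for phase Bound" >&2; exit 1; }
  sleep 0.1                             # the framework polls every 2s; shortened here
done
echo "PVC reached phase Bound"
```

Logging each unsuccessful poll, as the framework does, is what produces the run of "found but phase is Pending" lines before the final "phase=Bound" entry.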
I1113 22:36:12.936] Nov 13 22:36:06.624: INFO: At 2019-11-13 22:35:54 +0000 UTC - event for pvc-nb588: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-b0242f2e-c665-4943-b1c0-e650b55bca62 using kubernetes.io/azure-disk
I1113 22:36:12.936] Nov 13 22:36:06.626: INFO: POD NODE PHASE GRACE CONDITIONS
I1113 22:36:12.936] Nov 13 22:36:06.626: INFO:
I1113 22:36:12.936] Nov 13 22:36:06.630: INFO:
I1113 22:36:12.936] Logging node info for node k8s-agentpool1-12937544-vmss000000
I1113 22:36:12.939] Nov 13 22:36:06.632: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool1-12937544-vmss000000,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool1-12937544-vmss000000,UID:37d3525c-8b3d-4f5c-aa05-bdc93a75b69d,ResourceVersion:2680,Generation:0,CreationTimestamp:2019-11-13 22:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool1,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 0,kubernetes.azure.com/cluster: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool1-12937544-vmss000000,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,storageprofile: managed,storagetier: Premium_LRS,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool1-12937544-vmss000000"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-12937544-vmss/virtualMachines/0,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{7284887552 0} {} 7114148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498455552 0} {} 6346148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 22:18:09 +0000 UTC 2019-11-13 22:18:09 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 22:35:28 +0000 UTC 2019-11-13 22:17:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 22:35:28 +0000 UTC 2019-11-13 22:17:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 22:35:28 +0000 UTC 2019-11-13 22:17:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 22:35:28 +0000 UTC 2019-11-13 22:17:59 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{Hostname k8s-agentpool1-12937544-vmss000000} {InternalIP 10.240.0.4}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0995afc9ebc548dbabc09484fecfb45c,SystemUUID:9B37DB48-2AC3-DD47-B25E-A570A97840FE,BootID:c8102fbe-842d-4592-8823-9601ab08a77c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1] 604811790} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632 k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35] 287084775} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[quay.io/k8scsi/csi-attacher@sha256:6425af42299ba211de685a94953a5c4c6fcbfd2494e445437dd9ebd70b28bf8a quay.io/k8scsi/csi-attacher:v1.0.1] 50168619} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[quay.io/k8scsi/csi-provisioner@sha256:7d7d832832b536f32e899669a32d4fb75ab972da20c21a2bd6043eb498cf58e8 quay.io/k8scsi/csi-provisioner:v1.0.1] 47974767} {[quay.io/k8scsi/csi-cluster-driver-registrar@sha256:fafd75ae5442f192cfa8c2e792903aee30d5884b62e802e4464b0a895d21e3ef quay.io/k8scsi/csi-cluster-driver-registrar:v1.0.1] 45874691} {[quay.io/k8scsi/csi-snapshotter@sha256:a49e0da1af6f2bf717e41ba1eee8b5e6a1cbd66a709dd92cc43fe475fe2589eb quay.io/k8scsi/csi-snapshotter:v1.1.0] 44325352} {[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7] 22909487} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} {[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0] 14967303} {[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
I1113 22:36:12.940] Nov 13 22:36:06.633: INFO:
I1113 22:36:12.940] Logging kubelet events for node k8s-agentpool1-12937544-vmss000000
I1113 22:36:12.940] Nov 13 22:36:06.643: INFO:
I1113 22:36:12.940] Logging pods the kubelet thinks is on node k8s-agentpool1-12937544-vmss000000
I1113 22:36:12.940] Nov 13 22:36:06.682: INFO: csi-azuredisk-node-vb4gm started at 2019-11-13 22:20:08 +0000 UTC (0+3 container statuses recorded)
I1113 22:36:12.940] Nov 13 22:36:06.682: INFO: Container azuredisk ready: true, restart count 0
I1113 22:36:12.941] Nov 13 22:36:06.682: INFO: Container liveness-probe ready: true, restart count 0
I1113 22:36:12.941] Nov 13 22:36:06.682: INFO: Container node-driver-registrar ready: true, restart count 0
I1113 22:36:12.941] Nov 13 22:36:06.682: INFO: blobfuse-flexvol-installer-qk9r5 started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:12.941] Nov 13 22:36:06.682: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
I1113 22:36:12.941] Nov 13 22:36:06.682: INFO: azure-ip-masq-agent-cfj9x started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:12.942] Nov 13 22:36:06.682: INFO: Container azure-ip-masq-agent ready: true, restart count 0
I1113 22:36:12.942] Nov 13 22:36:06.682: INFO: keyvault-flexvolume-5gs88 started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:12.942] Nov 13 22:36:06.682: INFO: Container keyvault-flexvolume ready: true, restart count 0
I1113 22:36:12.942] Nov 13 22:36:06.682: INFO: kube-proxy-jdmdh started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:12.942] Nov 13 22:36:06.682: INFO: Container kube-proxy ready: true, restart count 0
I1113 22:36:12.943] Nov 13 22:36:06.682: INFO: kubernetes-dashboard-65966766b9-2954b started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:12.943] Nov 13 22:36:06.682: INFO: Container kubernetes-dashboard ready: true, restart count 0
I1113 22:36:12.943] Nov 13 22:36:06.682: INFO: csi-azuredisk-controller-98cdd8546-5rnp9 started at 2019-11-13 22:20:08 +0000 UTC (0+6 container statuses recorded)
I1113 22:36:12.943] Nov 13 22:36:06.682: INFO: Container azuredisk ready: true, restart count 0
I1113 22:36:12.943] Nov 13 22:36:06.682: INFO: Container cluster-driver-registrar ready: true, restart count 0
I1113 22:36:12.943] Nov 13 22:36:06.682: INFO: Container csi-attacher ready: true, restart count 0
I1113 22:36:12.944] Nov 13 22:36:06.682: INFO: Container csi-provisioner ready: true, restart count 0
I1113 22:36:12.944] Nov 13 22:36:06.682: INFO: Container csi-snapshotter ready: true, restart count 0
I1113 22:36:12.944] Nov 13 22:36:06.682: INFO: Container liveness-probe ready: true, restart count 0
I1113 22:36:12.944] Nov 13 22:36:06.708: INFO:
I1113 22:36:12.944] Latency metrics for node k8s-agentpool1-12937544-vmss000000
I1113 22:36:12.945] Nov 13 22:36:06.708: INFO:
I1113 22:36:12.945] Logging node info for node k8s-agentpool1-12937544-vmss000001
I1113 22:36:12.947] Nov 13 22:36:06.713: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool1-12937544-vmss000001,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool1-12937544-vmss000001,UID:17c48526-b21a-4e42-aae6-775c320037de,ResourceVersion:2700,Generation:0,CreationTimestamp:2019-11-13 22:17:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool1,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 1,kubernetes.azure.com/cluster: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool1-12937544-vmss000001,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,storageprofile: managed,storagetier: Premium_LRS,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool1-12937544-vmss000001"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-12937544-vmss/virtualMachines/1,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{7284883456 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498451456 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 22:18:09 +0000 UTC 2019-11-13 22:18:09 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:37 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:37 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:37 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:59 +0000 UTC KubeletReady kubelet is posting ready status.
AppArmor enabled}],Addresses:[{Hostname k8s-agentpool1-12937544-vmss000001} {InternalIP 10.240.0.5}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:25155ac3a5134033808ded837f6eb212,SystemUUID:E84CCC28-BEA9-6440-B057-DE6D1433212C,BootID:4228cd00-89ac-4b30-bebe-5361d9bdeeae,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1] 604811790} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632 k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35] 287084775} {[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0] 71821984} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1] 42541759} {[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7] 22909487} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} 
{[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} I1113 22:36:12.948] Nov 13 22:36:06.714: INFO: I1113 22:36:12.948] Logging kubelet events for node k8s-agentpool1-12937544-vmss000001 I1113 22:36:12.948] Nov 13 22:36:06.718: INFO: I1113 22:36:12.948] Logging pods the kubelet thinks is on node k8s-agentpool1-12937544-vmss000001 I1113 22:36:12.948] Nov 13 22:36:06.747: INFO: kube-proxy-rf2w8 started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.949] Nov 13 22:36:06.747: INFO: Container kube-proxy ready: true, restart count 0 I1113 22:36:12.949] Nov 13 22:36:06.747: INFO: tiller-deploy-7559b6b885-qhvlc started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.949] Nov 13 22:36:06.747: INFO: Container tiller ready: true, restart count 0 I1113 22:36:12.949] Nov 13 22:36:06.747: INFO: csi-azuredisk-node-5c6vt started at 2019-11-13 22:20:08 +0000 UTC (0+3 container statuses recorded) I1113 22:36:12.949] Nov 13 22:36:06.747: INFO: Container azuredisk ready: true, restart count 0 I1113 22:36:12.949] Nov 13 22:36:06.747: INFO: Container liveness-probe ready: true, restart count 0 I1113 22:36:12.950] Nov 13 22:36:06.747: INFO: Container node-driver-registrar ready: true, restart count 0 I1113 22:36:12.950] Nov 13 22:36:06.747: INFO: azure-ip-masq-agent-bqn2d started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.950] Nov 13 22:36:06.747: INFO: Container azure-ip-masq-agent ready: true, restart count 0 I1113 22:36:12.950] Nov 13 22:36:06.747: INFO: 
keyvault-flexvolume-5c7wt started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.950] Nov 13 22:36:06.747: INFO: Container keyvault-flexvolume ready: true, restart count 0 I1113 22:36:12.951] Nov 13 22:36:06.747: INFO: blobfuse-flexvol-installer-j279b started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.951] Nov 13 22:36:06.747: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 I1113 22:36:12.951] Nov 13 22:36:06.747: INFO: metrics-server-58ff8c5ddf-95j9s started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.951] Nov 13 22:36:06.747: INFO: Container metrics-server ready: true, restart count 0 I1113 22:36:12.951] Nov 13 22:36:06.774: INFO: I1113 22:36:12.951] Latency metrics for node k8s-agentpool1-12937544-vmss000001 I1113 22:36:12.952] Nov 13 22:36:06.774: INFO: I1113 22:36:12.952] Logging node info for node k8s-master-12937544-0 I1113 22:36:12.954] Nov 13 22:36:06.776: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master-12937544-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master-12937544-0,UID:492b2d9c-778d-4450-abc6-e3eada56c212,ResourceVersion:2701,Generation:0,CreationTimestamp:2019-11-13 22:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 0,kubernetes.azure.com/cluster: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59,kubernetes.azure.com/role: master,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-master-12937544-0,kubernetes.io/os: linux,kubernetes.io/role: master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: 
true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/virtualMachines/k8s-master-12937544-0,Unschedulable:false,Taints:[{node-role.kubernetes.io/master true NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{7284887552 0} {} 7114148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498455552 0} {} 6346148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 22:18:09 +0000 UTC 2019-11-13 22:18:09 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:59 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{Hostname k8s-master-12937544-0} {InternalIP 10.240.255.5}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2fa025b3c45042998d322c67eb21ee4b,SystemUUID:834F691E-5CED-674C-9470-D1A9B4745FDF,BootID:e1d19d43-373d-4ea0-a6e2-993be3959d40,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1] 604811790} {[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1] 83076194} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0] 42488424} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} I1113 22:36:12.954] Nov 13 22:36:06.776: INFO: I1113 22:36:12.954] Logging kubelet events for node k8s-master-12937544-0 I1113 22:36:12.954] Nov 13 22:36:06.781: INFO: I1113 22:36:12.955] Logging pods the kubelet thinks is on node k8s-master-12937544-0 I1113 22:36:12.955] Nov 13 22:36:06.811: INFO: kube-addon-manager-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.955] Nov 13 22:36:06.811: INFO: Container kube-addon-manager ready: true, restart count 0 I1113 22:36:12.955] Nov 13 22:36:06.811: INFO: 
kube-apiserver-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.956] Nov 13 22:36:06.811: INFO: Container kube-apiserver ready: true, restart count 0 I1113 22:36:12.956] Nov 13 22:36:06.811: INFO: kube-controller-manager-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.956] Nov 13 22:36:06.811: INFO: Container kube-controller-manager ready: true, restart count 0 I1113 22:36:12.956] Nov 13 22:36:06.811: INFO: kube-scheduler-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.956] Nov 13 22:36:06.811: INFO: Container kube-scheduler ready: true, restart count 0 I1113 22:36:12.956] Nov 13 22:36:06.811: INFO: coredns-87f5d796-h64wt started at 2019-11-13 22:18:09 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.957] Nov 13 22:36:06.811: INFO: Container coredns ready: true, restart count 0 I1113 22:36:12.957] Nov 13 22:36:06.811: INFO: azure-ip-masq-agent-vl6mk started at 2019-11-13 22:18:09 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.957] Nov 13 22:36:06.811: INFO: Container azure-ip-masq-agent ready: true, restart count 0 I1113 22:36:12.957] Nov 13 22:36:06.811: INFO: kube-proxy-dfmsj started at 2019-11-13 22:18:09 +0000 UTC (0+1 container statuses recorded) I1113 22:36:12.957] Nov 13 22:36:06.811: INFO: Container kube-proxy ready: true, restart count 0 I1113 22:36:12.958] Nov 13 22:36:06.831: INFO: I1113 22:36:12.958] Latency metrics for node k8s-master-12937544-0 I1113 22:36:12.958] Nov 13 22:36:06.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I1113 22:36:12.958] STEP: Destroying namespace "azuredisk-2839" for this suite. 
I1113 22:36:12.958] Nov 13 22:36:12.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I1113 22:36:12.958] Nov 13 22:36:12.928: INFO: namespace azuredisk-2839 deletion completed in 6.094196253s
I1113 22:36:12.959] •! Panic [28.536 seconds]
I1113 22:36:12.959] Dynamic Provisioning
I1113 22:36:12.959] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:31
I1113 22:36:12.959] [env] should retain PV with reclaimPolicy "Retain" [It]
I1113 22:36:12.959] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:194
I1113 22:36:12.959]
I1113 22:36:12.959] Test Panicked
I1113 22:36:12.960] runtime error: invalid memory address or nil pointer dereference
I1113 22:36:12.960] /usr/local/go/src/runtime/panic.go:82
I1113 22:36:12.960]
I1113 22:36:12.960] Full Stack Trace
I1113 22:36:12.960] /usr/local/go/src/runtime/panic.go:522 +0x1b5
I1113 22:36:12.960] sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPersistentVolumeClaim).DeleteBackingVolume(0xc000b2e0c0, 0xc00048d560)
I1113 22:36:12.961] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:349 +0x42
I1113 22:36:12.961] sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedReclaimPolicyTest).Run(0xc0006894d8, 0x20dfe20, 0xc000cfb680, 0xc000bb5760)
I1113 22:36:12.961] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_reclaim_policy_tester.go:44 +0x1cc
I1113 22:36:12.961] sigs.k8s.io/azuredisk-csi-driver/test/e2e.glob..func1.8()
I1113 22:36:12.961] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:208 +0x181
I1113 22:36:12.962] sigs.k8s.io/azuredisk-csi-driver/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00057b020, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
I1113 22:36:12.962] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:133 +0xa4
I1113 22:36:12.962] testing.tRunner(0xc000506500, 0x1e2c680)
I1113 22:36:12.962] /usr/local/go/src/testing/testing.go:865 +0xc0
I1113 22:36:12.962] created by testing.(*T).Run
I1113 22:36:12.962] /usr/local/go/src/testing/testing.go:916 +0x35a
I1113 22:36:12.962]
I1113 22:36:12.963] ------------------------------
I1113 22:36:35.035] STEP: Creating a kubernetes client
I1113 22:36:35.035] Nov 13 22:36:12.928: INFO: >>> kubeConfig: /workspace/aks980228950/kubeconfig/kubeconfig.westus2.json
I1113 22:36:35.036] STEP: Building a namespace api object, basename azuredisk
I1113 22:36:35.036] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-1247
I1113 22:36:35.036] STEP: Waiting for a default service account to be provisioned in namespace
I1113 22:36:35.036] STEP: Successfully provisioned AzureDisk volume: "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/disks/pre-provisioned-readOnly"
I1113 22:36:35.037]
I1113 22:36:35.037] STEP: setting up the PV
I1113 22:36:35.037] STEP: creating a PV
I1113 22:36:35.038] Nov 13 22:36:23.687: INFO: Unexpected error occurred: PersistentVolume "azuredisk-1247-kubernetes.io-azure-disk-preprovsioned-pv-ln9lm" is invalid: spec.csi.driver: Invalid value: "kubernetes.io/azure-disk": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
I1113 22:36:35.038] STEP: Collecting events from namespace "azuredisk-1247".
I1113 22:36:35.038] STEP: Found 0 events.
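The panic in the trace above fires inside `(*TestPersistentVolumeClaim).DeleteBackingVolume` (testsuites.go:349) as a nil pointer dereference. A minimal sketch of that failure mode and a guard against it — the struct and field here are hypothetical stand-ins (the real type presumably holds a `*v1.PersistentVolume` that was never populated before teardown ran; that is an inference from the trace, not something the log confirms):

```go
package main

import "fmt"

// TestPersistentVolumeClaim is a stand-in for the e2e helper type in
// test/e2e/testsuites/testsuites.go whose DeleteBackingVolume panicked.
type TestPersistentVolumeClaim struct {
	// persistentVolume stands in for the backing-volume pointer that
	// was still nil when teardown dereferenced it.
	persistentVolume *string
}

// DeleteBackingVolume guards the nil case instead of dereferencing it,
// turning the runtime panic into an ordinary error.
func (t *TestPersistentVolumeClaim) DeleteBackingVolume() error {
	if t.persistentVolume == nil {
		return fmt.Errorf("no backing volume recorded; nothing to delete")
	}
	fmt.Printf("deleting backing volume %s\n", *t.persistentVolume)
	return nil
}

func main() {
	pvc := &TestPersistentVolumeClaim{} // backing volume never set
	if err := pvc.DeleteBackingVolume(); err != nil {
		fmt.Println("error:", err) // reported instead of panicking
	}
}
```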
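The second failure above is an apiserver validation error: the test put the in-tree plugin name `kubernetes.io/azure-disk` into `spec.csi.driver`, and the `/` makes it an invalid DNS-1123 subdomain. A small sketch that checks candidate driver names against the exact regex quoted in the error message (the helper name `isValidCSIDriverName` is ours, not from the codebase):

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1123Subdomain is the validation regex quoted verbatim in the
// apiserver error, anchored to the full string.
var dns1123Subdomain = regexp.MustCompile(
	`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func isValidCSIDriverName(name string) bool {
	return dns1123Subdomain.MatchString(name)
}

func main() {
	// The in-tree plugin name contains '/', so it fails validation...
	fmt.Println(isValidCSIDriverName("kubernetes.io/azure-disk")) // false
	// ...while the CSI driver name seen in the node annotations passes.
	fmt.Println(isValidCSIDriverName("disk.csi.azure.com")) // true
}
```

The passing name, `disk.csi.azure.com`, is the one registered in the `csi.volume.kubernetes.io/nodeid` annotation on the agent nodes logged earlier.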
I1113 22:36:35.038] Nov 13 22:36:23.692: INFO: POD NODE PHASE GRACE CONDITIONS I1113 22:36:35.038] Nov 13 22:36:23.692: INFO: I1113 22:36:35.039] Nov 13 22:36:23.697: INFO: I1113 22:36:35.039] Logging node info for node k8s-agentpool1-12937544-vmss000000 I1113 22:36:35.042] Nov 13 22:36:23.700: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool1-12937544-vmss000000,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool1-12937544-vmss000000,UID:37d3525c-8b3d-4f5c-aa05-bdc93a75b69d,ResourceVersion:2680,Generation:0,CreationTimestamp:2019-11-13 22:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool1,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 0,kubernetes.azure.com/cluster: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool1-12937544-vmss000000,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,storageprofile: managed,storagetier: Premium_LRS,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool1-12937544-vmss000000"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-12937544-vmss/virtualMachines/0,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 
DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{7284887552 0} {} 7114148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498455552 0} {} 6346148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 22:18:09 +0000 UTC 2019-11-13 22:18:09 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 22:35:28 +0000 UTC 2019-11-13 22:17:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 22:35:28 +0000 UTC 2019-11-13 22:17:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 22:35:28 +0000 UTC 2019-11-13 22:17:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 22:35:28 +0000 UTC 2019-11-13 22:17:59 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{Hostname k8s-agentpool1-12937544-vmss000000} {InternalIP 10.240.0.4}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0995afc9ebc548dbabc09484fecfb45c,SystemUUID:9B37DB48-2AC3-DD47-B25E-A570A97840FE,BootID:c8102fbe-842d-4592-8823-9601ab08a77c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1] 604811790} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632 k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35] 287084775} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[quay.io/k8scsi/csi-attacher@sha256:6425af42299ba211de685a94953a5c4c6fcbfd2494e445437dd9ebd70b28bf8a quay.io/k8scsi/csi-attacher:v1.0.1] 50168619} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[quay.io/k8scsi/csi-provisioner@sha256:7d7d832832b536f32e899669a32d4fb75ab972da20c21a2bd6043eb498cf58e8 quay.io/k8scsi/csi-provisioner:v1.0.1] 47974767} {[quay.io/k8scsi/csi-cluster-driver-registrar@sha256:fafd75ae5442f192cfa8c2e792903aee30d5884b62e802e4464b0a895d21e3ef quay.io/k8scsi/csi-cluster-driver-registrar:v1.0.1] 45874691} {[quay.io/k8scsi/csi-snapshotter@sha256:a49e0da1af6f2bf717e41ba1eee8b5e6a1cbd66a709dd92cc43fe475fe2589eb quay.io/k8scsi/csi-snapshotter:v1.1.0] 44325352} 
{[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7] 22909487} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} {[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0] 14967303} {[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} I1113 22:36:35.042] Nov 13 22:36:23.701: INFO: I1113 22:36:35.042] Logging kubelet events for node k8s-agentpool1-12937544-vmss000000 I1113 22:36:35.043] Nov 13 22:36:23.705: INFO: I1113 22:36:35.043] Logging pods the kubelet thinks is on node k8s-agentpool1-12937544-vmss000000 I1113 22:36:35.043] Nov 13 22:36:23.711: INFO: csi-azuredisk-node-vb4gm started at 2019-11-13 22:20:08 +0000 UTC (0+3 container statuses recorded) I1113 22:36:35.043] Nov 13 22:36:23.711: INFO: Container azuredisk ready: true, restart count 0 I1113 22:36:35.044] Nov 13 22:36:23.711: INFO: Container liveness-probe ready: true, restart count 0 I1113 22:36:35.044] Nov 13 22:36:23.711: INFO: Container node-driver-registrar ready: true, restart count 0 I1113 22:36:35.044] Nov 13 22:36:23.711: INFO: blobfuse-flexvol-installer-qk9r5 started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.044] Nov 13 22:36:23.711: INFO: Container 
blobfuse-flexvol-installer ready: true, restart count 0 I1113 22:36:35.044] Nov 13 22:36:23.711: INFO: azure-ip-masq-agent-cfj9x started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.045] Nov 13 22:36:23.711: INFO: Container azure-ip-masq-agent ready: true, restart count 0 I1113 22:36:35.045] Nov 13 22:36:23.711: INFO: keyvault-flexvolume-5gs88 started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.045] Nov 13 22:36:23.711: INFO: Container keyvault-flexvolume ready: true, restart count 0 I1113 22:36:35.045] Nov 13 22:36:23.711: INFO: kube-proxy-jdmdh started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.045] Nov 13 22:36:23.711: INFO: Container kube-proxy ready: true, restart count 0 I1113 22:36:35.045] Nov 13 22:36:23.711: INFO: kubernetes-dashboard-65966766b9-2954b started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.046] Nov 13 22:36:23.711: INFO: Container kubernetes-dashboard ready: true, restart count 0 I1113 22:36:35.046] Nov 13 22:36:23.711: INFO: csi-azuredisk-controller-98cdd8546-5rnp9 started at 2019-11-13 22:20:08 +0000 UTC (0+6 container statuses recorded) I1113 22:36:35.046] Nov 13 22:36:23.711: INFO: Container azuredisk ready: true, restart count 0 I1113 22:36:35.046] Nov 13 22:36:23.711: INFO: Container cluster-driver-registrar ready: true, restart count 0 I1113 22:36:35.046] Nov 13 22:36:23.711: INFO: Container csi-attacher ready: true, restart count 0 I1113 22:36:35.046] Nov 13 22:36:23.711: INFO: Container csi-provisioner ready: true, restart count 0 I1113 22:36:35.047] Nov 13 22:36:23.711: INFO: Container csi-snapshotter ready: true, restart count 0 I1113 22:36:35.047] Nov 13 22:36:23.711: INFO: Container liveness-probe ready: true, restart count 0 I1113 22:36:35.047] Nov 13 22:36:23.734: INFO: I1113 22:36:35.047] Latency metrics for node k8s-agentpool1-12937544-vmss000000 I1113 22:36:35.047] Nov 13 
22:36:23.734: INFO: I1113 22:36:35.047] Logging node info for node k8s-agentpool1-12937544-vmss000001 I1113 22:36:35.050] Nov 13 22:36:23.736: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool1-12937544-vmss000001,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool1-12937544-vmss000001,UID:17c48526-b21a-4e42-aae6-775c320037de,ResourceVersion:2700,Generation:0,CreationTimestamp:2019-11-13 22:17:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool1,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 1,kubernetes.azure.com/cluster: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool1-12937544-vmss000001,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,storageprofile: managed,storagetier: Premium_LRS,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool1-12937544-vmss000001"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-12937544-vmss/virtualMachines/1,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{7284883456 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498451456 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 22:18:09 +0000 UTC 2019-11-13 22:18:09 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:37 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:37 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:37 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:59 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{Hostname k8s-agentpool1-12937544-vmss000001} {InternalIP 10.240.0.5}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:25155ac3a5134033808ded837f6eb212,SystemUUID:E84CCC28-BEA9-6440-B057-DE6D1433212C,BootID:4228cd00-89ac-4b30-bebe-5361d9bdeeae,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1] 604811790} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632 k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35] 287084775} {[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0] 71821984} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1] 42541759} {[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7] 22909487} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} 
{[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} I1113 22:36:35.050] Nov 13 22:36:23.736: INFO: I1113 22:36:35.050] Logging kubelet events for node k8s-agentpool1-12937544-vmss000001 I1113 22:36:35.050] Nov 13 22:36:23.741: INFO: I1113 22:36:35.050] Logging pods the kubelet thinks is on node k8s-agentpool1-12937544-vmss000001 I1113 22:36:35.050] Nov 13 22:36:23.746: INFO: blobfuse-flexvol-installer-j279b started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.051] Nov 13 22:36:23.746: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 I1113 22:36:35.051] Nov 13 22:36:23.746: INFO: metrics-server-58ff8c5ddf-95j9s started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.051] Nov 13 22:36:23.746: INFO: Container metrics-server ready: true, restart count 0 I1113 22:36:35.051] Nov 13 22:36:23.746: INFO: tiller-deploy-7559b6b885-qhvlc started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.051] Nov 13 22:36:23.746: INFO: Container tiller ready: true, restart count 0 I1113 22:36:35.051] Nov 13 22:36:23.746: INFO: csi-azuredisk-node-5c6vt started at 2019-11-13 22:20:08 +0000 UTC (0+3 container statuses recorded) I1113 22:36:35.052] Nov 13 22:36:23.746: INFO: Container azuredisk ready: true, restart count 0 I1113 22:36:35.052] Nov 13 22:36:23.746: INFO: Container liveness-probe ready: true, restart count 0 I1113 22:36:35.052] Nov 13 22:36:23.746: INFO: Container node-driver-registrar ready: true, restart count 0 I1113 
22:36:35.052] Nov 13 22:36:23.746: INFO: azure-ip-masq-agent-bqn2d started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.052] Nov 13 22:36:23.746: INFO: Container azure-ip-masq-agent ready: true, restart count 0 I1113 22:36:35.052] Nov 13 22:36:23.746: INFO: keyvault-flexvolume-5c7wt started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.052] Nov 13 22:36:23.746: INFO: Container keyvault-flexvolume ready: true, restart count 0 I1113 22:36:35.053] Nov 13 22:36:23.746: INFO: kube-proxy-rf2w8 started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.053] Nov 13 22:36:23.746: INFO: Container kube-proxy ready: true, restart count 0 I1113 22:36:35.053] Nov 13 22:36:23.767: INFO: I1113 22:36:35.053] Latency metrics for node k8s-agentpool1-12937544-vmss000001 I1113 22:36:35.053] Nov 13 22:36:23.767: INFO: I1113 22:36:35.053] Logging node info for node k8s-master-12937544-0 I1113 22:36:35.060] Nov 13 22:36:23.770: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master-12937544-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master-12937544-0,UID:492b2d9c-778d-4450-abc6-e3eada56c212,ResourceVersion:2701,Generation:0,CreationTimestamp:2019-11-13 22:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 0,kubernetes.azure.com/cluster: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59,kubernetes.azure.com/role: master,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-master-12937544-0,kubernetes.io/os: linux,kubernetes.io/role: master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: 
true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/virtualMachines/k8s-master-12937544-0,Unschedulable:false,Taints:[{node-role.kubernetes.io/master true NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{7284887552 0} {} 7114148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498455552 0} {} 6346148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 22:18:09 +0000 UTC 2019-11-13 22:18:09 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 22:35:38 +0000 UTC 2019-11-13 22:17:59 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{Hostname k8s-master-12937544-0} {InternalIP 10.240.255.5}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2fa025b3c45042998d322c67eb21ee4b,SystemUUID:834F691E-5CED-674C-9470-D1A9B4745FDF,BootID:e1d19d43-373d-4ea0-a6e2-993be3959d40,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1] 604811790} {[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1] 83076194} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0] 42488424} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} I1113 22:36:35.060] Nov 13 22:36:23.770: INFO: I1113 22:36:35.060] Logging kubelet events for node k8s-master-12937544-0 I1113 22:36:35.060] Nov 13 22:36:23.774: INFO: I1113 22:36:35.060] Logging pods the kubelet thinks is on node k8s-master-12937544-0 I1113 22:36:35.060] Nov 13 22:36:23.779: INFO: kube-addon-manager-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.060] Nov 13 22:36:23.779: INFO: Container kube-addon-manager ready: true, restart count 0 I1113 22:36:35.061] Nov 13 22:36:23.779: INFO: 
kube-apiserver-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.061] Nov 13 22:36:23.779: INFO: Container kube-apiserver ready: true, restart count 0 I1113 22:36:35.061] Nov 13 22:36:23.779: INFO: kube-controller-manager-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.061] Nov 13 22:36:23.779: INFO: Container kube-controller-manager ready: true, restart count 0 I1113 22:36:35.061] Nov 13 22:36:23.779: INFO: kube-scheduler-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.061] Nov 13 22:36:23.779: INFO: Container kube-scheduler ready: true, restart count 0 I1113 22:36:35.062] Nov 13 22:36:23.779: INFO: coredns-87f5d796-h64wt started at 2019-11-13 22:18:09 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.062] Nov 13 22:36:23.779: INFO: Container coredns ready: true, restart count 0 I1113 22:36:35.062] Nov 13 22:36:23.779: INFO: azure-ip-masq-agent-vl6mk started at 2019-11-13 22:18:09 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.062] Nov 13 22:36:23.779: INFO: Container azure-ip-masq-agent ready: true, restart count 0 I1113 22:36:35.062] Nov 13 22:36:23.779: INFO: kube-proxy-dfmsj started at 2019-11-13 22:18:09 +0000 UTC (0+1 container statuses recorded) I1113 22:36:35.062] Nov 13 22:36:23.779: INFO: Container kube-proxy ready: true, restart count 0 I1113 22:36:35.062] Nov 13 22:36:23.807: INFO: I1113 22:36:35.063] Latency metrics for node k8s-master-12937544-0 I1113 22:36:35.063] Nov 13 22:36:23.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I1113 22:36:35.063] STEP: Destroying namespace "azuredisk-1247" for this suite. 
I1113 22:36:35.063] Nov 13 22:36:29.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered I1113 22:36:35.063] Nov 13 22:36:29.899: INFO: namespace azuredisk-1247 deletion completed in 6.088588295s I1113 22:36:35.063] • Failure [22.106 seconds] I1113 22:36:35.063] [azuredisk-csi-e2e] [single-az] Pre-Provisioned I1113 22:36:35.064] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning.go:36 I1113 22:36:35.064] [env] should use a pre-provisioned volume and mount it as readOnly in a pod [It] I1113 22:36:35.064] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning.go:66 I1113 22:36:35.064] I1113 22:36:35.064] Unexpected error: I1113 22:36:35.064] <*errors.StatusError | 0xc000c5e640>: { I1113 22:36:35.064] ErrStatus: { I1113 22:36:35.064] TypeMeta: {Kind: "", APIVersion: ""}, I1113 22:36:35.064] ListMeta: { I1113 22:36:35.065] SelfLink: "", I1113 22:36:35.065] ResourceVersion: "", I1113 22:36:35.065] Continue: "", I1113 22:36:35.065] RemainingItemCount: nil, I1113 22:36:35.065] }, I1113 22:36:35.065] Status: "Failure", I1113 22:36:35.066] Message: "PersistentVolume \"azuredisk-1247-kubernetes.io-azure-disk-preprovsioned-pv-ln9lm\" is invalid: spec.csi.driver: Invalid value: \"kubernetes.io/azure-disk\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 
'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')", I1113 22:36:35.066] Reason: "Invalid", I1113 22:36:35.066] Details: { I1113 22:36:35.066] Name: "azuredisk-1247-kubernetes.io-azure-disk-preprovsioned-pv-ln9lm", I1113 22:36:35.066] Group: "", I1113 22:36:35.066] Kind: "PersistentVolume", I1113 22:36:35.066] UID: "", I1113 22:36:35.066] Causes: [ I1113 22:36:35.067] { I1113 22:36:35.067] Type: "FieldValueInvalid", I1113 22:36:35.067] Message: "Invalid value: \"kubernetes.io/azure-disk\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')", I1113 22:36:35.067] Field: "spec.csi.driver", I1113 22:36:35.067] }, I1113 22:36:35.067] ], I1113 22:36:35.067] RetryAfterSeconds: 0, I1113 22:36:35.067] }, I1113 22:36:35.068] Code: 422, I1113 22:36:35.068] }, I1113 22:36:35.068] } I1113 22:36:35.068] PersistentVolume "azuredisk-1247-kubernetes.io-azure-disk-preprovsioned-pv-ln9lm" is invalid: spec.csi.driver: Invalid value: "kubernetes.io/azure-disk": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 
'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') I1113 22:36:35.068] occurred I1113 22:36:35.068] I1113 22:36:35.068] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:174 I1113 22:36:35.069] ------------------------------ I1113 22:36:56.809] STEP: Creating a kubernetes client I1113 22:36:56.809] Nov 13 22:36:35.035: INFO: >>> kubeConfig: /workspace/aks980228950/kubeconfig/kubeconfig.westus2.json I1113 22:36:56.809] STEP: Building a namespace api object, basename azuredisk I1113 22:36:56.809] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-9368 I1113 22:36:56.810] STEP: Waiting for a default service account to be provisioned in namespace I1113 22:36:56.810] STEP: Successfully provisioned AzureDisk volume: "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/disks/pre-provisioned-retain-reclaimPolicy" I1113 22:36:56.810] I1113 22:36:56.810] STEP: setting up the PV I1113 22:36:56.810] STEP: creating a PV I1113 22:36:56.811] Nov 13 22:36:45.431: INFO: Unexpected error occurred: PersistentVolume "azuredisk-9368-kubernetes.io-azure-disk-preprovsioned-pv-58b66" is invalid: spec.csi.driver: Invalid value: "kubernetes.io/azure-disk": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') I1113 22:36:56.811] STEP: Collecting events from namespace "azuredisk-9368". I1113 22:36:56.811] STEP: Found 0 events. 
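Both failures above are the same API-server rejection: the test builds a pre-provisioned PersistentVolume with the in-tree plugin name `kubernetes.io/azure-disk` in `spec.csi.driver`, but CSI driver names must be DNS-1123 subdomains, and the `/` is not a legal character. A minimal sketch (my own illustration, not part of this job) applying the exact regex quoted in the error message; `disk.csi.azure.com` is the CSI driver name registered in the `csi.volume.kubernetes.io/nodeid` node annotations earlier in this log, and is presumably what the test would need to use instead:

```go
package main

import (
	"fmt"
	"regexp"
)

// DNS-1123 subdomain pattern, copied verbatim from the validation
// error in the log above (anchored for a full-string match).
var dns1123Subdomain = regexp.MustCompile(
	`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func main() {
	// The in-tree plugin name the test put in spec.csi.driver:
	// the '/' is not a legal DNS-1123 subdomain character, so the
	// API server rejects the PersistentVolume with a 422.
	fmt.Println(dns1123Subdomain.MatchString("kubernetes.io/azure-disk")) // false

	// The CSI driver name registered on the agent nodes in this log
	// is a valid subdomain and would pass this check.
	fmt.Println(dns1123Subdomain.MatchString("disk.csi.azure.com")) // true
}
```

In other words, the failure is in the test's PV construction (the `testsuites.go:174` frame), not in the cluster: a pre-provisioned volume driven through `spec.csi` needs the registered CSI driver name, while the in-tree `kubernetes.io/azure-disk` plugin is addressed through the `azureDisk` volume source instead.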
I1113 22:36:56.811] Nov 13 22:36:45.436: INFO: POD NODE PHASE GRACE CONDITIONS I1113 22:36:56.812] Nov 13 22:36:45.436: INFO: I1113 22:36:56.812] Nov 13 22:36:45.441: INFO: I1113 22:36:56.812] Logging node info for node k8s-agentpool1-12937544-vmss000000 I1113 22:36:56.815] Nov 13 22:36:45.443: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool1-12937544-vmss000000,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool1-12937544-vmss000000,UID:37d3525c-8b3d-4f5c-aa05-bdc93a75b69d,ResourceVersion:2811,Generation:0,CreationTimestamp:2019-11-13 22:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool1,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 0,kubernetes.azure.com/cluster: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool1-12937544-vmss000000,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,storageprofile: managed,storagetier: Premium_LRS,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool1-12937544-vmss000000"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-12937544-vmss/virtualMachines/0,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 
DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{7284887552 0} {} 7114148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498455552 0} {} 6346148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 22:18:09 +0000 UTC 2019-11-13 22:18:09 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 22:36:28 +0000 UTC 2019-11-13 22:17:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 22:36:28 +0000 UTC 2019-11-13 22:17:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 22:36:28 +0000 UTC 2019-11-13 22:17:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 22:36:28 +0000 UTC 2019-11-13 22:17:59 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{Hostname k8s-agentpool1-12937544-vmss000000} {InternalIP 10.240.0.4}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0995afc9ebc548dbabc09484fecfb45c,SystemUUID:9B37DB48-2AC3-DD47-B25E-A570A97840FE,BootID:c8102fbe-842d-4592-8823-9601ab08a77c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1] 604811790} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632 k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35] 287084775} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[quay.io/k8scsi/csi-attacher@sha256:6425af42299ba211de685a94953a5c4c6fcbfd2494e445437dd9ebd70b28bf8a quay.io/k8scsi/csi-attacher:v1.0.1] 50168619} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[quay.io/k8scsi/csi-provisioner@sha256:7d7d832832b536f32e899669a32d4fb75ab972da20c21a2bd6043eb498cf58e8 quay.io/k8scsi/csi-provisioner:v1.0.1] 47974767} {[quay.io/k8scsi/csi-cluster-driver-registrar@sha256:fafd75ae5442f192cfa8c2e792903aee30d5884b62e802e4464b0a895d21e3ef quay.io/k8scsi/csi-cluster-driver-registrar:v1.0.1] 45874691} {[quay.io/k8scsi/csi-snapshotter@sha256:a49e0da1af6f2bf717e41ba1eee8b5e6a1cbd66a709dd92cc43fe475fe2589eb quay.io/k8scsi/csi-snapshotter:v1.1.0] 44325352} 
{[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7] 22909487} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} {[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0] 14967303} {[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} I1113 22:36:56.815] Nov 13 22:36:45.443: INFO: I1113 22:36:56.815] Logging kubelet events for node k8s-agentpool1-12937544-vmss000000 I1113 22:36:56.816] Nov 13 22:36:45.447: INFO: I1113 22:36:56.816] Logging pods the kubelet thinks is on node k8s-agentpool1-12937544-vmss000000 I1113 22:36:56.816] Nov 13 22:36:45.453: INFO: csi-azuredisk-node-vb4gm started at 2019-11-13 22:20:08 +0000 UTC (0+3 container statuses recorded) I1113 22:36:56.816] Nov 13 22:36:45.453: INFO: Container azuredisk ready: true, restart count 0 I1113 22:36:56.817] Nov 13 22:36:45.453: INFO: Container liveness-probe ready: true, restart count 0 I1113 22:36:56.817] Nov 13 22:36:45.453: INFO: Container node-driver-registrar ready: true, restart count 0 I1113 22:36:56.817] Nov 13 22:36:45.453: INFO: blobfuse-flexvol-installer-qk9r5 started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:56.817] Nov 13 22:36:45.453: INFO: Container 
blobfuse-flexvol-installer ready: true, restart count 0 I1113 22:36:56.818] Nov 13 22:36:45.453: INFO: azure-ip-masq-agent-cfj9x started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:56.818] Nov 13 22:36:45.453: INFO: Container azure-ip-masq-agent ready: true, restart count 0 I1113 22:36:56.818] Nov 13 22:36:45.453: INFO: keyvault-flexvolume-5gs88 started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:56.819] Nov 13 22:36:45.453: INFO: Container keyvault-flexvolume ready: true, restart count 0 I1113 22:36:56.819] Nov 13 22:36:45.453: INFO: kube-proxy-jdmdh started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:56.819] Nov 13 22:36:45.453: INFO: Container kube-proxy ready: true, restart count 0 I1113 22:36:56.819] Nov 13 22:36:45.453: INFO: kubernetes-dashboard-65966766b9-2954b started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded) I1113 22:36:56.820] Nov 13 22:36:45.453: INFO: Container kubernetes-dashboard ready: true, restart count 0 I1113 22:36:56.820] Nov 13 22:36:45.453: INFO: csi-azuredisk-controller-98cdd8546-5rnp9 started at 2019-11-13 22:20:08 +0000 UTC (0+6 container statuses recorded) I1113 22:36:56.821] Nov 13 22:36:45.453: INFO: Container azuredisk ready: true, restart count 0 I1113 22:36:56.821] Nov 13 22:36:45.453: INFO: Container cluster-driver-registrar ready: true, restart count 0 I1113 22:36:56.821] Nov 13 22:36:45.453: INFO: Container csi-attacher ready: true, restart count 0 I1113 22:36:56.822] Nov 13 22:36:45.453: INFO: Container csi-provisioner ready: true, restart count 0 I1113 22:36:56.822] Nov 13 22:36:45.453: INFO: Container csi-snapshotter ready: true, restart count 0 I1113 22:36:56.822] Nov 13 22:36:45.453: INFO: Container liveness-probe ready: true, restart count 0 I1113 22:36:56.822] Nov 13 22:36:45.470: INFO: I1113 22:36:56.823] Latency metrics for node k8s-agentpool1-12937544-vmss000000 I1113 22:36:56.823] Nov 13 
22:36:45.470: INFO: I1113 22:36:56.823] Logging node info for node k8s-agentpool1-12937544-vmss000001 I1113 22:36:56.827] Nov 13 22:36:45.473: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool1-12937544-vmss000001,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool1-12937544-vmss000001,UID:17c48526-b21a-4e42-aae6-775c320037de,ResourceVersion:2837,Generation:0,CreationTimestamp:2019-11-13 22:17:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool1,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 1,kubernetes.azure.com/cluster: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool1-12937544-vmss000001,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,storageprofile: managed,storagetier: Premium_LRS,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool1-12937544-vmss000001"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-12937544-vmss/virtualMachines/1,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{7284883456 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498451456 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 22:18:09 +0000 UTC 2019-11-13 22:18:09 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 22:36:38 +0000 UTC 2019-11-13 22:17:37 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 22:36:38 +0000 UTC 2019-11-13 22:17:37 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 22:36:38 +0000 UTC 2019-11-13 22:17:37 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 22:36:38 +0000 UTC 2019-11-13 22:17:59 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{Hostname k8s-agentpool1-12937544-vmss000001} {InternalIP 10.240.0.5}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:25155ac3a5134033808ded837f6eb212,SystemUUID:E84CCC28-BEA9-6440-B057-DE6D1433212C,BootID:4228cd00-89ac-4b30-bebe-5361d9bdeeae,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1] 604811790} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:76bc4ec5af84acf7250d3b43cf2d1ad6fb770a9e7405406791472fba9a6c6632 k8sprowinternal.azurecr.io/azuredisk-csi:e2e-6f0d4d24a3db407e360e5c20ecf555590540ca35] 287084775} {[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0] 71821984} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1] 42541759} {[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7] 22909487} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} 
{[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
I1113 22:36:56.828] Nov 13 22:36:45.473: INFO:
I1113 22:36:56.828] Logging kubelet events for node k8s-agentpool1-12937544-vmss000001
I1113 22:36:56.828] Nov 13 22:36:45.477: INFO:
I1113 22:36:56.828] Logging pods the kubelet thinks is on node k8s-agentpool1-12937544-vmss000001
I1113 22:36:56.829] Nov 13 22:36:45.482: INFO: azure-ip-masq-agent-bqn2d started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.829] Nov 13 22:36:45.482: INFO: Container azure-ip-masq-agent ready: true, restart count 0
I1113 22:36:56.829] Nov 13 22:36:45.482: INFO: keyvault-flexvolume-5c7wt started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.829] Nov 13 22:36:45.482: INFO: Container keyvault-flexvolume ready: true, restart count 0
I1113 22:36:56.830] Nov 13 22:36:45.482: INFO: kube-proxy-rf2w8 started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.830] Nov 13 22:36:45.482: INFO: Container kube-proxy ready: true, restart count 0
I1113 22:36:56.830] Nov 13 22:36:45.482: INFO: tiller-deploy-7559b6b885-qhvlc started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.830] Nov 13 22:36:45.482: INFO: Container tiller ready: true, restart count 0
I1113 22:36:56.831] Nov 13 22:36:45.482: INFO: csi-azuredisk-node-5c6vt started at 2019-11-13 22:20:08 +0000 UTC (0+3 container statuses recorded)
I1113 22:36:56.831] Nov 13 22:36:45.482: INFO: Container azuredisk ready: true, restart count 0
I1113 22:36:56.831] Nov 13 22:36:45.482: INFO: Container liveness-probe ready: true, restart count 0
I1113 22:36:56.831] Nov 13 22:36:45.482: INFO: Container node-driver-registrar ready: true, restart count 0
I1113 22:36:56.832] Nov 13 22:36:45.482: INFO: blobfuse-flexvol-installer-j279b started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.832] Nov 13 22:36:45.482: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
I1113 22:36:56.832] Nov 13 22:36:45.482: INFO: metrics-server-58ff8c5ddf-95j9s started at 2019-11-13 22:18:10 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.832] Nov 13 22:36:45.482: INFO: Container metrics-server ready: true, restart count 0
I1113 22:36:56.833] Nov 13 22:36:45.503: INFO:
I1113 22:36:56.833] Latency metrics for node k8s-agentpool1-12937544-vmss000001
I1113 22:36:56.833] Nov 13 22:36:45.503: INFO:
I1113 22:36:56.833] Logging node info for node k8s-master-12937544-0
I1113 22:36:56.836] Nov 13 22:36:45.506: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master-12937544-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master-12937544-0,UID:492b2d9c-778d-4450-abc6-e3eada56c212,ResourceVersion:2838,Generation:0,CreationTimestamp:2019-11-13 22:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_DS2_v2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: westus2,failure-domain.beta.kubernetes.io/zone: 0,kubernetes.azure.com/cluster: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59,kubernetes.azure.com/role: master,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-master-12937544-0,kubernetes.io/os: linux,kubernetes.io/role: master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59/providers/Microsoft.Compute/virtualMachines/k8s-master-12937544-0,Unschedulable:false,Taints:[{node-role.kubernetes.io/master true NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{7284887552 0} {} 7114148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {} 8 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6498455552 0} {} 6346148Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-13 22:18:09 +0000 UTC 2019-11-13 22:18:09 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-13 22:36:38 +0000 UTC 2019-11-13 22:17:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-13 22:36:38 +0000 UTC 2019-11-13 22:17:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-13 22:36:38 +0000 UTC 2019-11-13 22:17:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-13 22:36:38 +0000 UTC 2019-11-13 22:17:59 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{Hostname k8s-master-12937544-0} {InternalIP 10.240.255.5}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2fa025b3c45042998d322c67eb21ee4b,SystemUUID:834F691E-5CED-674C-9470-D1A9B4745FDF,BootID:e1d19d43-373d-4ea0-a6e2-993be3959d40,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:5a7f8927d1deb5559d3f4d09a6402d6f6b911abb0f5e9d83a83b0946a70b5f52 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1194732973954437120-e57aa4c1] 604811790} {[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1] 83076194} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0] 42488424} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
I1113 22:36:56.836] Nov 13 22:36:45.506: INFO:
I1113 22:36:56.837] Logging kubelet events for node k8s-master-12937544-0
I1113 22:36:56.837] Nov 13 22:36:45.514: INFO:
I1113 22:36:56.837] Logging pods the kubelet thinks is on node k8s-master-12937544-0
I1113 22:36:56.837] Nov 13 22:36:45.519: INFO: kube-proxy-dfmsj started at 2019-11-13 22:18:09 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.838] Nov 13 22:36:45.519: INFO: Container kube-proxy ready: true, restart count 0
I1113 22:36:56.838] Nov 13 22:36:45.519: INFO: kube-addon-manager-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.838] Nov 13 22:36:45.519: INFO: Container kube-addon-manager ready: true, restart count 0
I1113 22:36:56.838] Nov 13 22:36:45.519: INFO: kube-apiserver-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.839] Nov 13 22:36:45.519: INFO: Container kube-apiserver ready: true, restart count 0
I1113 22:36:56.839] Nov 13 22:36:45.519: INFO: kube-controller-manager-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.839] Nov 13 22:36:45.519: INFO: Container kube-controller-manager ready: true, restart count 0
I1113 22:36:56.840] Nov 13 22:36:45.519: INFO: kube-scheduler-k8s-master-12937544-0 started at 2019-11-13 22:17:33 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.840] Nov 13 22:36:45.519: INFO: Container kube-scheduler ready: true, restart count 0
I1113 22:36:56.840] Nov 13 22:36:45.519: INFO: coredns-87f5d796-h64wt started at 2019-11-13 22:18:09 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.840] Nov 13 22:36:45.519: INFO: Container coredns ready: true, restart count 0
I1113 22:36:56.841] Nov 13 22:36:45.519: INFO: azure-ip-masq-agent-vl6mk started at 2019-11-13 22:18:09 +0000 UTC (0+1 container statuses recorded)
I1113 22:36:56.841] Nov 13 22:36:45.519: INFO: Container azure-ip-masq-agent ready: true, restart count 0
I1113 22:36:56.841] Nov 13 22:36:45.542: INFO:
I1113 22:36:56.841] Latency metrics for node k8s-master-12937544-0
I1113 22:36:56.842] Nov 13 22:36:45.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I1113 22:36:56.842] STEP: Destroying namespace "azuredisk-9368" for this suite.
I1113 22:36:56.842] Nov 13 22:36:51.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I1113 22:36:56.842] Nov 13 22:36:51.644: INFO: namespace azuredisk-9368 deletion completed in 6.099664466s
I1113 22:36:56.842] • Failure [21.773 seconds]
I1113 22:36:56.843] [azuredisk-csi-e2e] [single-az] Pre-Provisioned
I1113 22:36:56.843] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning.go:36
I1113 22:36:56.843] [env] should use a pre-provisioned volume and retain PV with reclaimPolicy "Retain" [It]
I1113 22:36:56.844] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning.go:100
I1113 22:36:56.844]
I1113 22:36:56.844] Unexpected error:
I1113 22:36:56.844] <*errors.StatusError | 0xc00073ca00>: {
I1113 22:36:56.844] ErrStatus: {
I1113 22:36:56.845] TypeMeta: {Kind: "", APIVersion: ""},
I1113 22:36:56.845] ListMeta: {
I1113 22:36:56.845] SelfLink: "",
I1113 22:36:56.845] ResourceVersion: "",
I1113 22:36:56.845] Continue: "",
I1113 22:36:56.846] RemainingItemCount: nil,
I1113 22:36:56.846] },
I1113 22:36:56.846] Status: "Failure",
I1113 22:36:56.846] Message: "PersistentVolume \"azuredisk-9368-kubernetes.io-azure-disk-preprovsioned-pv-58b66\" is invalid: spec.csi.driver: Invalid value: \"kubernetes.io/azure-disk\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
I1113 22:36:56.847] Reason: "Invalid",
I1113 22:36:56.847] Details: {
I1113 22:36:56.847] Name: "azuredisk-9368-kubernetes.io-azure-disk-preprovsioned-pv-58b66",
I1113 22:36:56.847] Group: "",
I1113 22:36:56.848] Kind: "PersistentVolume",
I1113 22:36:56.848] UID: "",
I1113 22:36:56.848] Causes: [
I1113 22:36:56.848] {
I1113 22:36:56.848] Type: "FieldValueInvalid",
I1113 22:36:56.849] Message: "Invalid value: \"kubernetes.io/azure-disk\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
I1113 22:36:56.849] Field: "spec.csi.driver",
I1113 22:36:56.849] },
I1113 22:36:56.849] ],
I1113 22:36:56.850] RetryAfterSeconds: 0,
I1113 22:36:56.850] },
I1113 22:36:56.850] Code: 422,
I1113 22:36:56.850] },
I1113 22:36:56.850] }
I1113 22:36:56.851] PersistentVolume "azuredisk-9368-kubernetes.io-azure-disk-preprovsioned-pv-58b66" is invalid: spec.csi.driver: Invalid value: "kubernetes.io/azure-disk": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
I1113 22:36:56.851] occurred
I1113 22:36:56.851]
I1113 22:36:56.851] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:174
I1113 22:36:56.852] ------------------------------
I1113 22:36:56.852] 2019/11/13 22:36:56 Uninstalling Azure Disk CSI Driver...
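Editor's annotation: the 422 above is the apiserver rejecting the pre-provisioned PV because the test set `spec.csi.driver` to the in-tree plugin name `kubernetes.io/azure-disk`; the `/` is not permitted in a DNS-1123 subdomain. A minimal sketch of that check, using the validation regex verbatim from the error message (the helper name is illustrative, not part of the test code):

```python
import re

# DNS-1123 subdomain regex, copied verbatim from the apiserver error above.
DNS1123_SUBDOMAIN = re.compile(
    r"[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*"
)

def is_valid_csi_driver_name(name: str) -> bool:
    """True if `name` is a valid DNS-1123 subdomain, as spec.csi.driver requires."""
    return DNS1123_SUBDOMAIN.fullmatch(name) is not None

# The in-tree plugin name contains '/', so it fails validation with the 422 above:
print(is_valid_csi_driver_name("kubernetes.io/azure-disk"))  # False
# A subdomain-shaped CSI driver name such as "disk.csi.azure.com" passes:
print(is_valid_csi_driver_name("disk.csi.azure.com"))        # True
```

This is why the fix belongs in the test's PV construction, not the cluster: the pre-provisioned PV must name a CSI driver in subdomain form rather than an in-tree plugin path.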
I1113 22:36:56.852] make[1]: Entering directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 22:36:56.852] helm delete --purge azuredisk-csi-driver
I1113 22:37:01.017] release "azuredisk-csi-driver" deleted
I1113 22:37:01.019] make[1]: Leaving directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1113 22:37:01.020] 2019/11/13 22:37:01 Azure Disk CSI Driver uninstalled
I1113 22:37:01.020]
I1113 22:37:01.020]
I1113 22:37:01.020] Summarizing 3 Failures:
I1113 22:37:01.020]
I1113 22:37:01.021] [Panic!] Dynamic Provisioning [It] [env] should retain PV with reclaimPolicy "Retain"
I1113 22:37:01.021] /usr/local/go/src/runtime/panic.go:82
I1113 22:37:01.021]
I1113 22:37:01.021] [Fail] [azuredisk-csi-e2e] [single-az] Pre-Provisioned [It] [env] should use a pre-provisioned volume and mount it as readOnly in a pod
I1113 22:37:01.021] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:174
I1113 22:37:01.022]
I1113 22:37:01.022] [Fail] [azuredisk-csi-e2e] [single-az] Pre-Provisioned [It] [env] should use a pre-provisioned volume and retain PV with reclaimPolicy "Retain"
I1113 22:37:01.022] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:174
I1113 22:37:01.022]
I1113 22:37:01.022] Ran 9 of 9 Specs in 1091.883 seconds
I1113 22:37:01.022] FAIL! -- 6 Passed | 3 Failed | 0 Pending | 0 Skipped
I1113 22:37:01.023] --- FAIL: TestE2E (1091.88s)
I1113 22:37:01.023] FAIL
I1113 22:37:01.023] FAIL sigs.k8s.io/azuredisk-csi-driver/test/e2e 1091.930s
W1113 22:37:01.199] make: *** [Makefile:59: e2e-test] Error 1
W1113 22:37:01.200] 2019/11/13 22:37:01 process.go:155: Step 'make e2e-test' finished in 18m59.24057542s
W1113 22:37:01.200] 2019/11/13 22:37:01 azure.go:922: Skippng DumpClusterLogs due to CCM not being enabled.
W1113 22:37:01.200] 2019/11/13 22:37:01 azure.go:916: Deleting resource group: kubetest-1f0f747c-065f-11ea-a26d-0242d1fa2a59.
W1113 22:44:46.713] 2019/11/13 22:44:46 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W1113 22:44:46.714] 2019/11/13 22:44:46 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W1113 22:44:46.847] 2019/11/13 22:44:46 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 133.312039ms
W1113 22:44:46.847] 2019/11/13 22:44:46 main.go:319: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
W1113 22:44:46.850] Traceback (most recent call last):
W1113 22:44:46.851] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in
W1113 22:44:46.852] main(parse_args())
W1113 22:44:46.852] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W1113 22:44:46.852] mode.start(runner_args)
W1113 22:44:46.852] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W1113 22:44:46.852] check_env(env, self.command, *args)
W1113 22:44:46.852] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W1113 22:44:46.852] subprocess.check_call(cmd, env=env)
W1113 22:44:46.853] File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W1113 22:44:46.853] raise CalledProcessError(retcode, cmd)
W1113 22:44:46.854] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--up', '--down', '--test', '--deployment=acsengine', '--provider=skeleton', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--ginkgo-parallel=1', '--acsengine-agentpoolcount=2', '--acsengine-admin-username=azureuser', '--acsengine-creds=/etc/azure-cred/credentials', '--acsengine-orchestratorRelease=1.16', '--acsengine-mastervmsize=Standard_DS2_v2', '--acsengine-agentvmsize=Standard_DS2_v2', '--acsengine-hyperkube=True', '--acsengine-location=westus2', '--acsengine-public-key=/etc/azure-ssh/azure-ssh-pub', '--acsengine-template-url=https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/00938364f6eef541bfee734c314ffb08dcd4ffbd/test/e2e/manifest/in-tree-vmss.json', '--acsengine-download-url=https://github.com/Azure/aks-engine/releases/download/v0.39.1/aks-engine-v0.39.1-linux-amd64.tar.gz', '--test-azure-disk-csi-driver=True', '--timeout=420m')' returned non-zero exit status 1
E1113 22:44:46.862] Command failed
I1113 22:44:46.862] process 685 exited with code 1 after 57.8m
E1113 22:44:46.862] FAIL: chewong-pull-in-tree-azure-disk-e2e
I1113 22:44:46.863] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1113 22:44:47.635] Activated service account credentials for: [kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com]
I1113 22:44:47.666] process 8447 exited with code 0 after 0.0m
I1113 22:44:47.667] Call: gcloud config get-value account
I1113 22:44:47.885] process 8459 exited with code 0 after 0.0m
I1113 22:44:47.885] Will upload results to gs://kubernetes-upstream/pr-logs using kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com
I1113 22:44:47.885] Upload result and artifacts...
I1113 22:44:47.885] Gubernator results at https://gubernator.k8s.io/build/kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120
I1113 22:44:47.886] Call: gsutil ls gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120/artifacts
W1113 22:44:48.861] CommandException: One or more URLs matched no objects.
E1113 22:44:48.938] Command failed
I1113 22:44:48.938] process 8471 exited with code 1 after 0.0m
W1113 22:44:48.938] Remote dir gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120/artifacts not exist yet
I1113 22:44:48.938] Call: gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120/artifacts
I1113 22:44:50.282] process 8615 exited with code 0 after 0.0m
I1113 22:44:50.283] Call: git rev-parse HEAD
I1113 22:44:50.286] process 9144 exited with code 0 after 0.0m
I1113 22:44:50.286] Call: git rev-parse HEAD
I1113 22:44:50.290] process 9145 exited with code 0 after 0.0m
I1113 22:44:50.290] Call: gsutil stat gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e/jobResultsCache.json
I1113 22:44:51.003] process 9146 exited with code 0 after 0.0m
I1113 22:44:51.004] Call: gsutil -q cat 'gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e/jobResultsCache.json#1573681420504036'
I1113 22:44:51.927] process 9290 exited with code 0 after 0.0m
I1113 22:44:51.928] Call: gsutil -q -h Content-Type:application/json -h x-goog-if-generation-match:1573681420504036 cp /tmp/gsutil_yOueEG gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e/jobResultsCache.json
I1113 22:44:53.019] process 9436 exited with code 0 after 0.0m
I1113 22:44:53.019] Call: gsutil stat gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/jobResultsCache.json
I1113 22:44:53.779] process 9616 exited with code 0 after 0.0m
I1113 22:44:53.780] Call: gsutil -q cat 'gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/jobResultsCache.json#1573681423137474'
I1113 22:44:54.804] process 9760 exited with code 0 after 0.0m
I1113 22:44:54.805] Call: gsutil -q -h Content-Type:application/json -h x-goog-if-generation-match:1573681423137474 cp /tmp/gsutil_OC7EVj gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/jobResultsCache.json
I1113 22:44:55.944] process 9906 exited with code 0 after 0.0m
I1113 22:44:55.945] Call: gsutil -q -h Content-Type:application/json cp /tmp/gsutil_u2jNwo gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/1194732973954437120/finished.json
I1113 22:44:57.045] process 10086 exited with code 0 after 0.0m
I1113 22:44:57.046] Call: gsutil -q -h Content-Type:text/plain -h 'Cache-Control:private, max-age=0, no-transform' cp /tmp/gsutil_9taoVT gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/188/chewong-pull-in-tree-azure-disk-e2e/latest-build.txt
I1113 22:44:58.290] process 10266 exited with code 0 after 0.0m
I1113 22:44:58.291] Call: gsutil -q -h Content-Type:text/plain -h 'Cache-Control:private, max-age=0, no-transform' cp /tmp/gsutil_AALJPh gs://kubernetes-upstream/pr-logs/directory/chewong-pull-in-tree-azure-disk-e2e/latest-build.txt
I1113 22:44:59.444] process 10446 exited with code 0 after 0.0m