Docker in Docker enabled, initializing...
================================================================================
Starting Docker: docker.
================================================================================
Done setting up docker in docker.
+ WRAPPED_COMMAND_PID=155
+ wait 155
+ ./scripts/ci-e2e.sh
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin
rm -f "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl*"
curl --retry 3 -fsL https://storage.googleapis.com/kubernetes-release/release/v1.22.4/bin/linux/amd64/kubectl -o /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4
ln -sf /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl
chmod +x /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl
kind not found, installing
installing Azure CLI
Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 https://download.docker.com/linux/debian buster InRelease [54.0 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7911 kB]
Get:6 http://security.debian.org/debian-security buster/updates/main amd64 Packages [319 kB]
Get:7 http://deb.debian.org/debian buster-updates/main amd64 Packages [8796 B]
Get:8 https://download.docker.com/linux/debian buster/stable amd64 Packages [26.2 kB]
Fetched 8558 kB in 2s (4494 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
apt-transport-https is already the newest version (1.8.2.3).
ca-certificates is already the newest version (20200601~deb10u2).
curl is already the newest version (7.64.0-4+deb10u2).
gnupg is already the newest version (2.2.12-1+deb10u1).
gnupg set to manually installed.
lsb-release is already the newest version (10.2019051400).
0 upgraded, 0 newly installed, 0 to remove and 7 not upgraded.
deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ buster main
Hit:1 http://security.debian.org/debian-security buster/updates InRelease
Hit:2 https://download.docker.com/linux/debian buster InRelease
Hit:3 http://deb.debian.org/debian buster InRelease
Hit:4 http://deb.debian.org/debian buster-updates InRelease
Get:5 https://packages.microsoft.com/repos/azure-cli buster InRelease [29.7 kB]
Get:6 https://packages.microsoft.com/repos/azure-cli buster/main all Packages [11.2 kB]
Get:7 https://packages.microsoft.com/repos/azure-cli buster/main amd64 Packages [11.5 kB]
Fetched 52.4 kB in 1s (63.2 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  azure-cli
0 upgraded, 1 newly installed, 0 to remove and 7 not upgraded.
Need to get 75.7 MB of archives.
After this operation, 1092 MB of additional disk space will be used.
Get:1 https://packages.microsoft.com/repos/azure-cli buster/main amd64 azure-cli all 2.36.0-1~buster [75.7 MB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 75.7 MB in 2s (36.6 MB/s)
Selecting previously unselected package azure-cli.
(Reading database ... 20709 files and directories currently installed.)
Preparing to unpack .../azure-cli_2.36.0-1~buster_all.deb ...
Unpacking azure-cli (2.36.0-1~buster) ...
Setting up azure-cli (2.36.0-1~buster) ...
Login Succeeded
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
generating sshkey for e2e
PULL_POLICY=IfNotPresent MANAGER_IMAGE=capzci.azurecr.io/cluster-api-azure-controller-amd64:20220518170743 \
make docker-build docker-push \
test-e2e-run
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
docker pull docker/dockerfile:1.1-experimental
1.1-experimental: Pulling from docker/dockerfile
612615616619: Pulling fs layer
612615616619: Download complete
612615616619: Pull complete
Digest: sha256:de85b2f3a3e8a2f7fe48e8e84a65f6fdd5cd5183afa6412fff9caa6871649c44
Status: Downloaded newer image for docker/dockerfile:1.1-experimental
docker.io/docker/dockerfile:1.1-experimental
docker pull docker.io/library/golang:1.17
1.17: Pulling from library/golang
67e8aa6c8bbc: Pulling fs layer
627e6c1e1055: Pulling fs layer
0670968926f6: Pulling fs layer
5a8b0e20be4b: Pulling fs layer
10f766b17f53: Pulling fs layer
d50395ad3fff: Pulling fs layer
5edebd472405: Pulling fs layer
10f766b17f53: Waiting
d50395ad3fff: Waiting
5edebd472405: Waiting
5a8b0e20be4b: Waiting
627e6c1e1055: Verifying Checksum
627e6c1e1055: Download complete
0670968926f6: Download complete
67e8aa6c8bbc: Verifying Checksum
67e8aa6c8bbc: Download complete
5a8b0e20be4b: Verifying Checksum
5a8b0e20be4b: Download complete
5edebd472405: Verifying Checksum
5edebd472405: Download complete
10f766b17f53: Verifying Checksum
10f766b17f53: Download complete
d50395ad3fff: Verifying Checksum
d50395ad3fff: Download complete
67e8aa6c8bbc: Pull complete
627e6c1e1055: Pull complete
0670968926f6: Pull complete
5a8b0e20be4b: Pull complete
10f766b17f53: Pull complete
d50395ad3fff: Pull complete
5edebd472405: Pull complete
Digest: sha256:79138c839452a2a9d767f0bba601bd5f63af4a1d8bb645bf6141bff8f4f33bb8
Status: Downloaded newer image for golang:1.17
docker.io/library/golang:1.17
docker pull gcr.io/distroless/static:latest
latest: Pulling from distroless/static
36698cfa5275: Pulling fs layer
36698cfa5275: Verifying Checksum
36698cfa5275: Download complete
36698cfa5275: Pull complete
Digest: sha256:d6fa9db9548b5772860fecddb11d84f9ebd7e0321c0cb3c02870402680cc315f
Status: Downloaded newer image for gcr.io/distroless/static:latest
gcr.io/distroless/static:latest
DOCKER_BUILDKIT=1 docker build --build-arg goproxy=https://proxy.golang.org --build-arg ARCH=amd64 --build-arg ldflags="-X 'sigs.k8s.io/cluster-api-provider-azure/version.buildDate=2022-05-17T21:02:31Z' -X 'sigs.k8s.io/cluster-api-provider-azure/version.gitCommit=04d23a031abc338d0946511cd1dd2fb3f6ed7c70' -X
'sigs.k8s.io/cluster-api-provider-azure/version.gitTreeState=clean' -X 'sigs.k8s.io/cluster-api-provider-azure/version.gitMajor=1' -X 'sigs.k8s.io/cluster-api-provider-azure/version.gitMinor=3' -X 'sigs.k8s.io/cluster-api-provider-azure/version.gitVersion=v1.3.0-9-04d23a031abc33'" . -t capzci.azurecr.io/cluster-api-azure-controller-amd64:20220518170743 #1 [internal] load build definition from Dockerfile #1 sha256:dc10496f3c672557f1d9c3ac9ba9caee6a44c2d03a2df48e31d2049b07383a5d #1 transferring dockerfile: 2.08kB 0.0s done #1 DONE 0.0s #2 [internal] load .dockerignore #2 sha256:79cb08d92f0ea7925fc1468cbfc61f3bffea4d334ef64928565c5f3437a701b1 #2 transferring context: 204B done #2 DONE 0.0s #3 resolve image config for docker.io/docker/dockerfile:1.1-experimental #3 sha256:5e049a138e8d4cf2e79a9da24ccc4d95fdecddea22b8e833c6eceb941d5ebfc3 #3 DONE 0.0s #4 docker-image://docker.io/docker/dockerfile:1.1-experimental #4 sha256:44995514c833e6fdd1510efb354b1a8f88938d7dd5fdfa7ae7673244e5c6ed81 #4 DONE 0.0s #5 [internal] load build definition from Dockerfile #5 sha256:7155655139f3e5416ace3e05c689c59f15ecb2d7c90873703844f5d39e429ee5 #5 transferring dockerfile: 2.08kB done #5 DONE 0.0s #6 [internal] load metadata for docker.io/library/golang:1.17 #6 sha256:85d71333fef77c424238254d83208db5f9cca4e89afebc9089673b74f68e11e2 #6 DONE 0.0s #7 [internal] load metadata for gcr.io/distroless/static:nonroot #7 sha256:1edcc8e618a78dfea82763afe12cfe9a8596997b9300f45c9a79c5da0d617755 #7 DONE 0.3s #8 [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:2556293984c5738fc75208cce52cf0a4762c709cf38e4bf8def65a61992da0ad #8 sha256:7affd7c68e228d0abf587788ce7d1ae068350ec059b6ba6582f79943ffe94f1e #8 resolve gcr.io/distroless/static:nonroot@sha256:2556293984c5738fc75208cce52cf0a4762c709cf38e4bf8def65a61992da0ad 0.0s done #8 sha256:2556293984c5738fc75208cce52cf0a4762c709cf38e4bf8def65a61992da0ad 1.67kB / 1.67kB done #8 sha256:abb120b4ebb4e734dc4c82181bb32b5a4f8bcf78e4540b094b9d59ba7a29a5ad 426B / 426B done #8 sha256:bbd57f9cdb20afb520e49785bd05fc089700862d740b059432c3cd21c059d708 478B / 478B done #8 DONE 0.1s #11 [internal] load build context #11 sha256:c51634716d9e6002461930f2d186e6e069b3fabf91023e07c5d9efc95c7e0d3e #11 transferring context: 3.70MB 0.1s done #11 DONE 0.2s #9 [builder 1/8] FROM docker.io/library/golang:1.17 #9 sha256:3b61066afeee46350c5f7c191c05b94bcd32c717224da2a6f8fdbaabd821801d #9 DONE 0.2s #10 [builder 2/8] WORKDIR /workspace #10 sha256:ec97eb590b8e49ea0c06b38fa084d077e4d7b9433dfe268504fec0f37cec0da9 #10 DONE 0.0s #12 [builder 3/8] COPY go.mod go.mod #12 sha256:905e5054a80411352ef4db3ce0142429ec0bb73b22658bf4b757592b2ec43577 #12 DONE 0.0s #13 [builder 4/8] COPY go.sum go.sum #13 sha256:bc7baed916c64433739261b4210d9364143f42c474671fd767e8d07177dccf42 #13 DONE 0.0s #14 [builder 5/8] RUN --mount=type=cache,target=/go/pkg/mod go mod download #14 sha256:b3af725a4e76a35e1c415132f3f27a056c54dce28d7db2858d0704162e356f88 #14 DONE 22.5s #15 [builder 6/8] COPY ./ ./ #15 sha256:1ffd1a8811fe85b728a92ae7a19c56b86ed63cecd67bf4b64b444d0b78c34089 #15 DONE 0.1s #16 [builder 7/8] RUN --mount=type=cache,target=/root/.cache/go-build --mount=type=cache,target=/go/pkg/mod go build . 
#16 sha256:9234a9891efa055d3a7594f57a704828f41a48e441f5e50ed97a6c86385eb110 #16 DONE 67.0s #17 [builder 8/8] RUN --mount=type=cache,target=/root/.cache/go-build --mount=type=cache,target=/go/pkg/mod CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags "-X 'sigs.k8s.io/cluster-api-provider-azure/version.buildDate=2022-05-17T21:02:31Z' -X 'sigs.k8s.io/cluster-api-provider-azure/version.gitCommit=04d23a031abc338d0946511cd1dd2fb3f6ed7c70' -X 'sigs.k8s.io/cluster-api-provider-azure/version.gitTreeState=clean' -X 'sigs.k8s.io/cluster-api-provider-azure/version.gitMajor=1' -X 'sigs.k8s.io/cluster-api-provider-azure/version.gitMinor=3' -X 'sigs.k8s.io/cluster-api-provider-azure/version.gitVersion=v1.3.0-9-04d23a031abc33' -extldflags '-static'" -o manager . #17 sha256:2745d7daa4f6e5f66f07e679abca5d6300636e895cf3faf19d35114dae934921 #17 DONE 43.8s #18 [stage-1 2/3] COPY --from=builder /workspace/manager . #18 sha256:5bd8184982de17ca185d36f6cc3b001e7fd05f904ff8c1acefadca72f501f75f #18 DONE 0.3s #19 exporting to image #19 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00 #19 exporting layers #19 exporting layers 0.4s done #19 writing image sha256:98b392bc223e35b3de2037af459f15d01053808f0fc070b8c933c39b0f000fc3 done #19 naming to capzci.azurecr.io/cluster-api-azure-controller-amd64:20220518170743 done #19 DONE 0.4s make set-manifest-image MANIFEST_IMG=capzci.azurecr.io/cluster-api-azure-controller-amd64 MANIFEST_TAG=20220518170743 TARGET_RESOURCE="./config/default/manager_image_patch.yaml" make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' Updating kustomize image patch file for default resource sed -i'' -e 's@image: .*@image: '"capzci.azurecr.io/cluster-api-azure-controller-amd64:20220518170743"'@' ./config/default/manager_image_patch.yaml make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' make set-manifest-pull-policy TARGET_RESOURCE="./config/default/manager_pull_policy.yaml" make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' Updating kustomize pull policy file for default resource sed -i'' -e 's@imagePullPolicy: .*@imagePullPolicy: '"IfNotPresent"'@' ./config/default/manager_pull_policy.yaml make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' docker push capzci.azurecr.io/cluster-api-azure-controller-amd64:20220518170743 The push refers to repository [capzci.azurecr.io/cluster-api-azure-controller-amd64] 5ffed9c05d85: Preparing 0b031aac6569: Preparing 0b031aac6569: Layer already exists 5ffed9c05d85: Pushed 20220518170743: digest: sha256:e24e1df9f7a1f4e8d497eced30f7f9212da858297ee9979d2ace115a834c5767 size: 739 GOBIN=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin ./scripts/go_install.sh sigs.k8s.io/kustomize/kustomize/v4 kustomize v4.5.2 rm: cannot remove '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize*': No such file or directory go: downloading sigs.k8s.io/kustomize/kustomize/v4 v4.5.2 go: downloading sigs.k8s.io/kustomize/cmd/config v0.10.4 go: downloading github.com/spf13/cobra v1.2.1 go: downloading sigs.k8s.io/kustomize/api v0.11.2 go: downloading sigs.k8s.io/kustomize/kyaml v0.13.3 go: downloading github.com/spf13/pflag v1.0.5 go: downloading sigs.k8s.io/yaml v1.2.0 go: downloading gopkg.in/yaml.v2 v2.4.0 go: downloading github.com/pkg/errors v0.9.1 go: downloading github.com/go-errors/errors v1.0.1 go: downloading github.com/olekukonko/tablewriter v0.0.4 go: downloading 
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e go: downloading github.com/evanphx/json-patch v4.11.0+incompatible go: downloading github.com/imdario/mergo v0.3.5 go: downloading github.com/davecgh/go-spew v1.1.1 go: downloading github.com/stretchr/testify v1.7.0 go: downloading github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca go: downloading github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 go: downloading gopkg.in/inf.v0 v0.9.1 go: downloading github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 go: downloading github.com/mattn/go-runewidth v0.0.7 go: downloading go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 go: downloading github.com/pmezard/go-difflib v1.0.0 go: downloading gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b go: downloading github.com/mitchellh/mapstructure v1.4.1 go: downloading github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a go: downloading github.com/go-openapi/swag v0.19.5 go: downloading github.com/go-openapi/jsonreference v0.19.3 go: downloading github.com/go-openapi/jsonpointer v0.19.3 go: downloading github.com/PuerkitoBio/purell v1.1.1 go: downloading github.com/mailru/easyjson v0.7.0 go: downloading golang.org/x/text v0.3.5 go: downloading github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 go: downloading golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4 /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template.yaml /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-md-remediation --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-md-remediation.yaml /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-kcp-remediation --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-kcp-remediation.yaml /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-kcp-adoption/step1 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-kcp-adoption.yaml echo "---" >> /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-kcp-adoption.yaml /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-kcp-adoption/step2 --load-restrictor LoadRestrictionsNone >> /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-kcp-adoption.yaml 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-machine-pool --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-machine-pool.yaml /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-node-drain --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-node-drain.yaml /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-upgrades --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-upgrades.yaml /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/infrastructure-azure/v1beta1/cluster-template-kcp-scale-in.yaml GOBIN=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin ./scripts/go_install.sh github.com/drone/envsubst/v2/cmd/envsubst envsubst v2.0.0-20210730161058-179042472c46 rm: cannot remove '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst*': No such file or directory go: downloading github.com/drone/envsubst/v2 v2.0.0-20210730161058-179042472c46 mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin rm -f "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/helm*" curl -fsSL -o /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 chmod 700 /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/get_helm.sh USE_SUDO=false HELM_INSTALL_DIR=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin DESIRED_VERSION=v3.8.1 BINARY_NAME=helm-v3.8.1 /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/get_helm.sh Downloading https://get.helm.sh/helm-v3.8.1-linux-amd64.tar.gz Verifying checksum... Done. 
Preparing to install helm-v3.8.1 into /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin helm-v3.8.1 installed into /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/helm-v3.8.1 ln -sf /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/helm-v3.8.1 /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/helm rm -f /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/get_helm.sh GOBIN=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin ./scripts/go_install.sh github.com/onsi/ginkgo/ginkgo ginkgo v1.16.5 rm: cannot remove '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/ginkgo*': No such file or directory go: downloading github.com/onsi/ginkgo v1.16.5 go: downloading github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 go: downloading golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e go: downloading github.com/nxadm/tail v1.4.8 go: downloading golang.org/x/sys v0.0.0-20210112080510-489259a85091 go: downloading gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 go: downloading github.com/fsnotify/fsnotify v1.4.9 /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 < /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/config/azure-dev.yaml > /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/config/azure-dev-envsubst.yaml && \ /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/ginkgo-v1.16.5 -v -trace -tags=e2e -focus="API Version Upgrade" -skip="" -nodes=3 --noColor=false ./test/e2e -- \ -e2e.artifacts-folder="/logs/artifacts" \ -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/config/azure-dev-envsubst.yaml" \ -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false go: downloading github.com/Azure/azure-sdk-for-go v58.1.0+incompatible go: downloading github.com/Azure/go-autorest/autorest v0.11.23 go: downloading github.com/Azure/go-autorest v14.2.0+incompatible go: downloading github.com/Azure/go-autorest/autorest/azure/auth v0.5.10 go: downloading github.com/Azure/go-autorest/autorest/to v0.4.0 go: downloading github.com/blang/semver v3.5.1+incompatible go: downloading github.com/hashicorp/go-retryablehttp v0.7.0 go: downloading github.com/onsi/gomega v1.17.0 go: downloading golang.org/x/crypto v0.0.0-20211117183948-ae814b36b871 go: downloading golang.org/x/mod v0.5.1 go: downloading helm.sh/helm/v3 v3.8.1 go: downloading k8s.io/api v0.23.4 go: downloading k8s.io/apimachinery v0.23.4 go: downloading k8s.io/client-go v0.23.4 go: downloading k8s.io/utils v0.0.0-20211116205334-6203023598ed go: downloading sigs.k8s.io/cluster-api v1.1.1 go: downloading sigs.k8s.io/cluster-api/test v1.1.2 go: downloading sigs.k8s.io/controller-runtime v0.11.1 go: downloading sigs.k8s.io/kind v0.11.1 go: downloading github.com/Azure/aad-pod-identity v1.8.6 go: downloading github.com/Azure/go-autorest/logger v0.2.1 go: downloading github.com/Azure/go-autorest/tracing v0.6.0 go: downloading github.com/Azure/go-autorest/autorest/adal v0.9.18 go: downloading github.com/Azure/go-autorest/autorest/azure/cli v0.4.2 go: downloading github.com/dimchansky/utfbom v1.1.1 go: downloading github.com/hashicorp/go-cleanhttp v0.5.2 go: downloading github.com/Masterminds/semver/v3 v3.1.1 go: downloading github.com/Masterminds/sprig/v3 v3.2.2 go: downloading github.com/gosuri/uitable v0.0.4 go: downloading golang.org/x/term 
v0.0.0-20210927222741-03fcf44c2211 go: downloading k8s.io/cli-runtime v0.23.4 go: downloading sigs.k8s.io/yaml v1.3.0 go: downloading github.com/gogo/protobuf v1.3.2 go: downloading github.com/google/gofuzz v1.2.0 go: downloading golang.org/x/net v0.0.0-20220107192237-5cfca573fb4d go: downloading k8s.io/klog/v2 v2.30.0 go: downloading github.com/imdario/mergo v0.3.12 go: downloading github.com/google/uuid v1.3.0 go: downloading github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d go: downloading github.com/google/go-cmp v0.5.7 go: downloading k8s.io/kubectl v0.23.4 go: downloading k8s.io/apiextensions-apiserver v0.23.4 go: downloading github.com/gobuffalo/flect v0.2.4 go: downloading github.com/evanphx/json-patch v4.12.0+incompatible go: downloading github.com/Azure/go-autorest/autorest/date v0.3.0 go: downloading github.com/Azure/go-autorest/autorest/validation v0.3.1 go: downloading sigs.k8s.io/structured-merge-diff/v4 v4.2.1 go: downloading golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 go: downloading github.com/mitchellh/go-homedir v1.1.0 go: downloading github.com/cyphar/filepath-securejoin v0.2.3 go: downloading github.com/mitchellh/copystructure v1.2.0 go: downloading github.com/xeipuuv/gojsonschema v1.2.0 go: downloading github.com/golang-jwt/jwt/v4 v4.0.0 go: downloading github.com/BurntSushi/toml v0.4.1 go: downloading github.com/gobwas/glob v0.2.3 go: downloading github.com/containerd/containerd v1.5.9 go: downloading github.com/opencontainers/image-spec v1.0.2 go: downloading github.com/sirupsen/logrus v1.8.1 go: downloading oras.land/oras-go v1.1.0 go: downloading github.com/Masterminds/squirrel v1.5.2 go: downloading github.com/jmoiron/sqlx v1.3.4 go: downloading github.com/lib/pq v1.10.4 go: downloading github.com/rubenv/sql-migrate v0.0.0-20210614095031-55d5740dbbcc go: downloading github.com/golang/protobuf v1.5.2 go: downloading github.com/googleapis/gnostic v0.5.5 go: downloading github.com/fatih/color v1.13.0 go: downloading github.com/Masterminds/goutils v1.1.1 go: downloading github.com/huandu/xstrings v1.3.2 go: downloading github.com/shopspring/decimal v1.2.0 go: downloading github.com/spf13/cast v1.4.1 go: downloading golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac go: downloading golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 go: downloading github.com/go-logr/logr v1.2.2 go: downloading gomodules.xyz/jsonpatch/v2 v2.2.0 go: downloading go.opentelemetry.io/otel v1.4.0 go: downloading go.opentelemetry.io/otel/trace v1.4.0 go: downloading k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 go: downloading github.com/spf13/cobra v1.3.0 go: downloading k8s.io/component-base v0.23.4 go: downloading k8s.io/cluster-bootstrap v0.23.0 go: downloading github.com/spf13/viper v1.10.0 go: downloading k8s.io/apiserver v0.23.4 go: downloading github.com/coredns/corefile-migration v1.0.14 go: downloading github.com/evanphx/json-patch/v5 v5.6.0 go: downloading github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de go: downloading github.com/docker/docker v20.10.12+incompatible go: downloading github.com/docker/go-connections v0.4.0 go: downloading github.com/docker/distribution v2.7.1+incompatible go: downloading golang.org/x/text v0.3.7 go: downloading sigs.k8s.io/kustomize/kyaml v0.13.0 go: downloading sigs.k8s.io/kustomize/api v0.10.1 go: downloading github.com/alessio/shellescape v1.4.1 go: downloading sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 go: downloading github.com/json-iterator/go v1.1.12 go: downloading 
github.com/mitchellh/reflectwalk v1.0.2 go: downloading github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 go: downloading github.com/opencontainers/go-digest v1.0.0 go: downloading github.com/docker/cli v20.10.11+incompatible go: downloading golang.org/x/sync v0.0.0-20210220032951-036812b2e83c go: downloading github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 go: downloading gopkg.in/gorp.v1 v1.7.2 go: downloading google.golang.org/protobuf v1.27.1 go: downloading github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d go: downloading github.com/mattn/go-runewidth v0.0.13 go: downloading github.com/mattn/go-colorable v0.1.12 go: downloading github.com/mattn/go-isatty v0.0.14 go: downloading github.com/prometheus/client_golang v1.12.1 go: downloading github.com/fsnotify/fsnotify v1.5.1 go: downloading github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 go: downloading github.com/peterbourgon/diskv v2.0.1+incompatible go: downloading github.com/magiconair/properties v1.8.5 go: downloading github.com/mitchellh/mapstructure v1.4.3 go: downloading github.com/spf13/afero v1.6.0 go: downloading github.com/spf13/jwalterweatherman v1.1.0 go: downloading gopkg.in/ini.v1 v1.66.2 go: downloading github.com/subosito/gotenv v1.2.0 go: downloading github.com/docker/go-units v0.4.0 go: downloading github.com/google/go-github/v33 v33.0.0 go: downloading github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd go: downloading github.com/modern-go/reflect2 v1.0.2 go: downloading github.com/moby/locker v1.0.1 go: downloading github.com/docker/docker-credential-helpers v0.6.4 go: downloading google.golang.org/grpc v1.44.0 go: downloading github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 go: downloading github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5 go: downloading github.com/MakeNowJust/heredoc v1.0.0 go: downloading github.com/russross/blackfriday v1.5.2 go: downloading github.com/rivo/uniseg v0.2.0 go: downloading github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da go: downloading github.com/google/btree v1.0.1 go: downloading github.com/go-logr/stdr v1.2.2 go: downloading github.com/prometheus/client_model v0.2.0 go: downloading github.com/prometheus/common v0.32.1 go: downloading github.com/beorn7/perks v1.0.1 go: downloading github.com/cespare/xxhash/v2 v2.1.2 go: downloading github.com/prometheus/procfs v0.7.3 go: downloading github.com/hashicorp/hcl v1.0.0 go: downloading github.com/pelletier/go-toml v1.9.4 go: downloading github.com/coredns/caddy v1.1.0 go: downloading github.com/moby/term v0.0.0-20210610120745-9d4ed1856297 go: downloading github.com/morikuni/aec v1.0.0 go: downloading github.com/klauspost/compress v1.13.6 go: downloading github.com/go-openapi/jsonreference v0.19.5 go: downloading github.com/go-openapi/swag v0.19.14 go: downloading github.com/mitchellh/go-wordwrap v1.0.0 go: downloading github.com/moby/spdystream v0.2.0 go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 go: downloading github.com/google/go-querystring v1.0.0 go: downloading github.com/go-openapi/jsonpointer v0.19.5 go: downloading github.com/mailru/easyjson v0.7.6 go: downloading github.com/valyala/fastjson v1.6.3 go: downloading github.com/gorilla/mux v1.8.0 go: downloading github.com/google/cel-go v0.9.0 go: downloading google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368 go: 
downloading github.com/docker/go-metrics v0.0.1 go: downloading github.com/josharian/intern v1.0.0 go: downloading github.com/stoewer/go-strcase v1.2.0 go: downloading github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20210826220005-b48c857c3a0e Running Suite: capz-e2e ======================= Random Seed: 1652893856 Will run 24 specs Running in parallel across 3 nodes STEP: Finding image skus for offer cncf-upstream/capi in uksouth STEP: Finding image skus for offer cncf-upstream/capi-windows in uksouth SSSSSSSSSS ------------------------------ STEP: Finding image skus for offer cncf-upstream/capi in uksouth STEP: Finding image skus for offer cncf-upstream/capi-windows in uksouth STEP: Initializing a runtime.Scheme with all the GVK relevant for this test STEP: Loading the e2e test configuration from "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/config/azure-dev-envsubst.yaml" STEP: Finding image skus for offer cncf-upstream/capi in uksouth STEP: Finding image skus for offer cncf-upstream/capi-windows in uksouth STEP: Creating a clusterctl local repository into "/logs/artifacts" STEP: Reading the ClusterResourceSet manifest /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/templates/addons/calico.yaml STEP: Setting up the bootstrap cluster INFO: Creating a kind cluster with name "capz-e2e" Creating cluster "capz-e2e" ... • Ensuring node image (kindest/node:v1.23.3) 🖼 ... WARNING: Overriding docker network due to KIND_EXPERIMENTAL_DOCKER_NETWORK WARNING: Here be dragons! This is not supported currently. ✓ Ensuring node image (kindest/node:v1.23.3) 🖼 • Preparing nodes 📦 ... ✓ Preparing nodes 📦 • Writing configuration 📜 ... ✓ Writing configuration 📜 • Starting control-plane 🕹️ ... ✓ Starting control-plane 🕹️ • Installing CNI 🔌 ... ✓ Installing CNI 🔌 • Installing StorageClass 💾 ... 
✓ Installing StorageClass 💾 INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind755682830 INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220518170743" INFO: Loading image: "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" to "/tmp/image-tar2120748920/image.tar": unable to read image data: Error response from daemon: reference does not exist INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" to "/tmp/image-tar361014769/image.tar": unable to read image data: Error response from daemon: reference does not exist INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" to "/tmp/image-tar1293562109/image.tar": unable to read image data: Error response from daemon: reference does not exist STEP: Initializing the bootstrap cluster INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure INFO: Waiting for provider controllers to be running STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6984cdc687-mgcnt, container manager STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-5b6fb7d684-jhvbn, container manager STEP: Waiting for deployment capi-system/capi-controller-manager to be available INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-674bcdd5ff-wnfzt, container manager STEP: Waiting for deployment capz-system/capz-controller-manager to be available INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-6c4bb89c78-hvfjz, container manager STEP: Finding image skus for offer cncf-upstream/capi in uksouth STEP: Finding image skus for offer cncf-upstream/capi-windows in uksouth SSSSSSSSSSSS ------------------------------ Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3  Should create a management cluster and then upgrade all the providers /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147 STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec INFO: Creating namespace clusterctl-upgrade-kklo8o INFO: Creating event watcher for namespace "clusterctl-upgrade-kklo8o" STEP: Creating a workload cluster to be used as a new management cluster INFO: Creating the workload cluster with name "clusterctl-upgrade-ah9et1" using 
the "(default)" template (Kubernetes v1.21.2, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster clusterctl-upgrade-ah9et1 --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default) INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/clusterctl-upgrade-ah9et1 created azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ah9et1 created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-control-plane created machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-0 created machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-win created azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-win created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-win created machinehealthcheck.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-mhc-0 created clusterresourceset.addons.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-calico created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-clusterctl-upgrade-ah9et1 created configmap/cni-clusterctl-upgrade-ah9et1-calico created configmap/csi-proxy-addon created configmap/containerd-logger-clusterctl-upgrade-ah9et1 created INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by clusterctl-upgrade-kklo8o/clusterctl-upgrade-ah9et1-control-plane to be provisioned STEP: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane clusterctl-upgrade-kklo8o/clusterctl-upgrade-ah9et1-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist STEP: Waiting for the workload nodes to exist INFO: Waiting for the machine pools to be provisioned STEP: Turning the workload cluster into a management cluster with older versions of providers INFO: Downloading clusterctl binary from https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.23/clusterctl-linux-amd64 STEP: Initializing the workload cluster with older versions of providers INFO: clusterctl init --core cluster-api:v0.3.23 --bootstrap kubeadm:v0.3.23 --control-plane kubeadm:v0.3.23 --infrastructure azure:v0.4.15 INFO: Waiting for provider controllers to be running STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-5c4d4c9db4-h49qx, container kube-rbac-proxy INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod 
capi-kubeadm-bootstrap-controller-manager-5c4d4c9db4-h49qx, container manager STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-685446d8d8-4r4mt, container kube-rbac-proxy INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-685446d8d8-4r4mt, container manager STEP: Waiting for deployment capi-system/capi-controller-manager to be available INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-7bc9769778-tjcl2, container kube-rbac-proxy INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-7bc9769778-tjcl2, container manager STEP: Waiting for deployment capi-webhook-system/capi-controller-manager to be available INFO: Creating log watcher for controller capi-webhook-system/capi-controller-manager, pod capi-controller-manager-d98d75d79-767pj, container kube-rbac-proxy INFO: Creating log watcher for controller capi-webhook-system/capi-controller-manager, pod capi-controller-manager-d98d75d79-767pj, container manager STEP: Waiting for deployment capi-webhook-system/capi-kubeadm-bootstrap-controller-manager to be available INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-7b5976cb87-lgnqq, container kube-rbac-proxy INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-7b5976cb87-lgnqq, container manager STEP: Waiting for deployment capi-webhook-system/capi-kubeadm-control-plane-controller-manager to be available INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-5c78576f9c-84cd5, container kube-rbac-proxy INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-5c78576f9c-84cd5, container manager STEP: Waiting for deployment capi-webhook-system/capz-controller-manager to be available INFO: Creating log watcher for controller capi-webhook-system/capz-controller-manager, pod capz-controller-manager-55f9c97c75-kbmfc, container kube-rbac-proxy INFO: Creating log watcher for controller capi-webhook-system/capz-controller-manager, pod capz-controller-manager-55f9c97c75-kbmfc, container manager STEP: Waiting for deployment capz-system/capz-controller-manager to be available INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-58d8469fdb-5p84q, container kube-rbac-proxy INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-58d8469fdb-5p84q, container manager STEP: THE MANAGEMENT CLUSTER WITH THE OLDER VERSION OF PROVIDERS IS UP&RUNNING! 
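For reference, the webhook servers that this older (v1alpha3-contract) provider layout runs in the separate capi-webhook-system namespace can be spot-checked by hand; a minimal sketch, assuming the kubeconfig of the clusterctl-upgrade-ah9et1 workload cluster is the active context (namespace, deployment, and service names are the ones shown in the log above):
  # list the webhook-serving deployments and pods installed by cluster-api v0.3.x
  kubectl get deploy,pods -n capi-webhook-system
  # the kubeadm control plane mutating webhook is served through this Service; an empty
  # ENDPOINTS column here would be consistent with the API server getting "connection refused"
  kubectl get endpoints capi-kubeadm-control-plane-webhook-service -n capi-webhook-system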
STEP: Creating a namespace for hosting the clusterctl-upgrade test workload cluster
INFO: Creating namespace clusterctl-upgrade
INFO: Creating event watcher for namespace "clusterctl-upgrade"
STEP: Creating a test workload cluster
INFO: Creating the workload cluster with name "clusterctl-upgrade-vl3rki" using the "(default)" template (Kubernetes v1.22.9, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: Detect clusterctl version via: clusterctl version
INFO: clusterctl config cluster clusterctl-upgrade-vl3rki --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1alpha3-kubeadmcontrolplane?timeout=30s": dial tcp 10.103.103.76:443: connect: connection refused
STEP: Deleting all cluster.x-k8s.io/v1alpha3 clusters in namespace clusterctl-upgrade in management cluster clusterctl-upgrade-ah9et1
STEP: Deleting cluster clusterctl-upgrade-vl3rki
INFO: Waiting for the Cluster clusterctl-upgrade/clusterctl-upgrade-vl3rki to be deleted
STEP: Waiting for cluster clusterctl-upgrade-vl3rki to be deleted
STEP: Deleting cluster clusterctl-upgrade/clusterctl-upgrade-ah9et1
I0518 17:29:48.045990 30356 request.go:665] Waited for 1.134813021s due to client-side throttling, not priority and fairness, request: GET:https://clusterctl-upgrade-ah9et1-99e942d.uksouth.cloudapp.azure.com:6443/apis/policy/v1?timeout=32s
STEP: Redacting sensitive information from logs

• Failure [688.035 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
  API Version Upgrade
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:202
    upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:203
      Should create a management cluster and then upgrade all the providers [It]
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147

      Expected success, but got an error:
          <*errors.withStack | 0xc0004ab128>: {
              error: <*exec.ExitError | 0xc000f9bfc0>{
                  ProcessState: {
                      pid: 34458,
                      status: 256,
                      rusage: {
                          Utime: {Sec: 0, Usec: 430056},
                          Stime: {Sec: 0, Usec: 214828},
                          Maxrss: 95636,
                          Ixrss: 0,
                          Idrss: 0,
                          Isrss: 0,
                          Minflt: 12672,
                          Majflt: 0,
                          Nswap: 0,
                          Inblock: 0,
                          Oublock: 25192,
                          Msgsnd: 0,
                          Msgrcv: 0,
                          Nsignals: 0,
                          Nvcsw: 4872,
                          Nivcsw: 403,
                      },
                  },
                  Stderr: nil,
              },
              stack: [0x2539955, 0x2539e7d, 0x26db52c, 0x2c2da0f, 0x15dee9a, 0x15de865, 0x15dd8fb, 0x15e41c9, 0x15e3ba7, 0x15f0f65, 0x15f0c85, 0x15f04c5, 0x15f27f2, 0x15ffd25, 0x15ffb3e, 0x2f913de, 0x1322e82, 0x125fb41],
          }
      exit status 1

      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:272

      Full Stack Trace
      sigs.k8s.io/cluster-api/test/e2e.ClusterctlUpgradeSpec.func2()
          /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:272 +0x1723
      github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0x60)
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/leafnodes/runner.go:113 +0xba
      github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0x0)
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/leafnodes/runner.go:64 +0x125
      github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0)
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/leafnodes/it_node.go:26 +0x7b
      github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00004f860, 0xc0005cd9c0, {0x3a1da60, 0xc00006e900})
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/spec/spec.go:215 +0x2a9
      github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00004f860, {0x3a1da60, 0xc00006e900})
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/spec/spec.go:138 +0xe7
      github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000559080, 0xc00004f860)
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/specrunner/spec_runner.go:200 +0xe5
      github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000559080)
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/specrunner/spec_runner.go:170 +0x1a5
      github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000559080)
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/specrunner/spec_runner.go:66 +0xc5
      github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001b4b60, {0x7f4c45f79c68, 0xc000683d40}, {0x3687040, 0x3174fa0}, {0xc0000a85c0, 0x2, 0x2}, {0x3a9a0d8, 0xc00006e900}, ...)
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/suite/suite.go:79 +0x4d2
      github.com/onsi/ginkgo.runSpecsWithCustomReporters({0x3a21380, 0xc000683d40}, {0x3687040, 0x8}, {0xc0000a8580, 0x2, 0x36acb56})
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/ginkgo_dsl.go:245 +0x185
      github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x3a21380, 0xc000683d40}, {0x3687040, 0x8}, {0xc000099f20, 0x1, 0x1})
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/ginkgo_dsl.go:228 +0x1be
      sigs.k8s.io/cluster-api-provider-azure/test/e2e.TestE2E(0x0)
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:262 +0x19e
      testing.tRunner(0xc000683d40, 0x37b3f60)
          /usr/local/go/src/testing/testing.go:1259 +0x102
      created by testing.(*T).Run
          /usr/local/go/src/testing/testing.go:1306 +0x35a
------------------------------
Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4
  Should create a management cluster and then upgrade all the providers
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec
INFO: Creating namespace clusterctl-upgrade-wtrork
INFO: Creating event watcher for namespace "clusterctl-upgrade-wtrork"
STEP: Creating a workload cluster to be used as a new management cluster
INFO: Creating the workload cluster with name "clusterctl-upgrade-ryvha1" using the "(default)" template (Kubernetes v1.21.2, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster clusterctl-upgrade-ryvha1 --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
cluster.cluster.x-k8s.io/clusterctl-upgrade-ryvha1 created
azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ryvha1 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-control-plane created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-md-0 created machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-md-win created azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-md-win created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-md-win created machinehealthcheck.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-mhc-0 created clusterresourceset.addons.cluster.x-k8s.io/clusterctl-upgrade-ryvha1-calico created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-clusterctl-upgrade-ryvha1 created configmap/cni-clusterctl-upgrade-ryvha1-calico created configmap/csi-proxy-addon created configmap/containerd-logger-clusterctl-upgrade-ryvha1 created INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1-control-plane to be provisioned STEP: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist STEP: Waiting for the workload nodes to exist INFO: Waiting for the machine pools to be provisioned STEP: Turning the workload cluster into a management cluster with older versions of providers INFO: Downloading clusterctl binary from https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.7/clusterctl-linux-amd64 STEP: Initializing the workload cluster with older versions of providers STEP: Running Pre-init steps against the management cluster INFO: clusterctl init --core cluster-api:v0.4.7 --bootstrap kubeadm:v0.4.7 --control-plane kubeadm:v0.4.7 --infrastructure azure:v0.5.3 INFO: Waiting for provider controllers to be running STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-69948d997c-hcmwk, container manager STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-6568cd5ddb-wmcvn, container manager STEP: Waiting for deployment capi-system/capi-controller-manager to be available INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-b4744c594-rn7mj, container manager STEP: Waiting for deployment capz-system/capz-controller-manager to be available INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-6c95f546f6-rktd9, container manager STEP: THE MANAGEMENT CLUSTER 
WITH THE OLDER VERSION OF PROVIDERS IS UP&RUNNING! STEP: Creating a namespace for hosting the clusterctl-upgrade test workload cluster INFO: Creating namespace clusterctl-upgrade INFO: Creating event watcher for namespace "clusterctl-upgrade" STEP: Creating a test workload cluster INFO: Creating the workload cluster with name "clusterctl-upgrade-aj2d10" using the "(default)" template (Kubernetes v1.22.9, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: Detect clusterctl version via: clusterctl version INFO: clusterctl config cluster clusterctl-upgrade-aj2d10 --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default) INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/clusterctl-upgrade-aj2d10 created azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-aj2d10 created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterctl-upgrade-aj2d10-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-aj2d10-control-plane created machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-aj2d10-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-aj2d10-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-aj2d10-md-0 created machinehealthcheck.cluster.x-k8s.io/clusterctl-upgrade-aj2d10-mhc-0 created clusterresourceset.addons.cluster.x-k8s.io/clusterctl-upgrade-aj2d10-calico created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created configmap/cni-clusterctl-upgrade-aj2d10-calico created STEP: Waiting for the machines to exists STEP: THE MANAGEMENT CLUSTER WITH OLDER VERSION OF PROVIDERS WORKS! STEP: Upgrading providers to the latest version available INFO: clusterctl upgrade apply --contract v1beta1 INFO: Waiting for provider controllers to be running STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-65f85c657f-6jll7, container manager STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-9dd9b5b88-k2hfm, container manager STEP: Waiting for deployment capi-system/capi-controller-manager to be available INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-649ff448f9-xjdvk, container manager STEP: Waiting for deployment capz-system/capz-controller-manager to be available INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-798c4fc98-tstqd, container manager STEP: THE MANAGEMENT CLUSTER WAS SUCCESSFULLY UPGRADED! INFO: Scaling machine deployment clusterctl-upgrade/clusterctl-upgrade-aj2d10-md-0 from 1 to 2 replicas INFO: Waiting for correct number of replicas to exist STEP: THE UPGRADED MANAGEMENT CLUSTER WORKS! STEP: PASSED! 
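The scale step the framework just performed can be reproduced by hand against the upgraded management cluster; a minimal sketch, assuming its kubeconfig is the active context (the MachineDeployment name and namespace are taken from the log, and MachineDeployments expose the scale subresource, so kubectl scale works on them):
  # scale the worker MachineDeployment from 1 to 2 replicas, as the test does
  kubectl -n clusterctl-upgrade scale machinedeployment clusterctl-upgrade-aj2d10-md-0 --replicas=2
  # watch until the replica and ready counts converge
  kubectl -n clusterctl-upgrade get machinedeployment clusterctl-upgrade-aj2d10-md-0 -w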
STEP: Deleting all cluster.x-k8s.io/v1beta1 clusters in namespace clusterctl-upgrade in management cluster clusterctl-upgrade-ryvha1
STEP: Deleting cluster clusterctl-upgrade-aj2d10
INFO: Waiting for the Cluster clusterctl-upgrade/clusterctl-upgrade-aj2d10 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-aj2d10 to be deleted
STEP: Deleting cluster clusterctl-upgrade/clusterctl-upgrade-ryvha1
STEP: Deleting namespace clusterctl-upgrade used for hosting the "clusterctl-upgrade" test
INFO: Deleting namespace clusterctl-upgrade
STEP: Deleting providers
INFO: clusterctl delete --all
STEP: Dumping logs from the "clusterctl-upgrade-ryvha1" workload cluster
STEP: Dumping workload cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 logs
May 18 17:40:13.677: INFO: Collecting logs for Linux node clusterctl-upgrade-ryvha1-control-plane-wmc4b in cluster clusterctl-upgrade-ryvha1 in namespace clusterctl-upgrade-wtrork
May 18 17:40:27.347: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-ryvha1-control-plane-wmc4b
May 18 17:40:28.839: INFO: Collecting logs for Linux node clusterctl-upgrade-ryvha1-md-0-zbvtl in cluster clusterctl-upgrade-ryvha1 in namespace clusterctl-upgrade-wtrork
May 18 17:41:13.406: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-ryvha1-md-0-zbvtl
STEP: Dumping workload cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 kube-system pod logs
STEP: Fetching kube-system pod logs took 679.609202ms
STEP: Dumping workload cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container etcd
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-zg9d9
STEP: Collecting events for Pod kube-system/etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: failed to find events of Pod "etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b"
STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-62946, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-lnl7t
STEP: Collecting events for Pod kube-system/calico-node-62946
STEP: Collecting events for Pod kube-system/kube-proxy-5sq62
STEP: Creating log watcher for controller kube-system/kube-proxy-mvzlj, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-mkz2p, container coredns
STEP: Collecting events for Pod kube-system/kube-apiserver-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-controller-manager
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-mkz2p
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-v2vvm, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: Collecting events for Pod kube-system/kube-proxy-mvzlj
STEP: Creating log watcher for controller kube-system/kube-proxy-5sq62, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-zg9d9, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-v2vvm
STEP: Creating log watcher for controller kube-system/calico-node-lnl7t, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: failed to find events of Pod "kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b"
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 233.056384ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-wtrork" namespace
STEP: Deleting cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1
STEP: Deleting cluster clusterctl-upgrade-ryvha1
INFO: Waiting for the Cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-ryvha1 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-v2vvm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-mkz2p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mvzlj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5sq62, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-62946, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lnl7t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-zg9d9, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-wtrork
STEP: Redacting sensitive information from logs
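[Editor's note] The activity-log error above is a 400 from the Azure Monitor API because the collector passed an empty resourceGroupName, most likely because the workload cluster's resource group had already been torn down or was never recorded for this nested management cluster. A rough shell equivalent of the query being attempted, with a placeholder resource group name, is:

# Hypothetical resource group; the real one is derived from the cluster name.
RESOURCE_GROUP="clusterctl-upgrade-ryvha1"

# Listing activity-log entries requires a non-empty resource group (or another scope);
# an empty value produces the BadRequest seen in the log.
az monitor activity-log list \
  --resource-group "${RESOURCE_GROUP}" \
  --offset 2h \
  --output table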
• [SLOW TEST:1900.592 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
  API Version Upgrade
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:202
    upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:231
      Should create a management cluster and then upgrade all the providers
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
------------------------------
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3 [It] Should create a management cluster and then upgrade all the providers
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:272

Ran 2 of 24 Specs in 2256.868 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 22 Skipped

Ginkgo ran 1 suite in 39m14.148017841s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

make[1]: *** [Makefile:634: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:642: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
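[Editor's note] Two housekeeping hints follow directly from the summary above. The Ginkgo 2.0 banner is purely informational and can be silenced exactly as the notice suggests, and when iterating on the single failed spec locally it is faster to focus Ginkgo on it rather than re-run the whole suite. The GINKGO_FOCUS variable below is an assumption about the repository's Makefile wiring, not something shown in this log:

# Silence the Ginkgo 2.0 release-candidate notice (either option works).
export ACK_GINKGO_RC=true
# ...or, equivalently:
touch "${HOME}/.ack-ginkgo-rc"

# Re-run only the failed upgrade spec (assumes the Makefile forwards GINKGO_FOCUS to Ginkgo).
GINKGO_FOCUS="upgrade from v1alpha3 to v1beta1" make test-e2e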