Cloning into '/home/prow/go/src/github.com/google/licenseclassifier'...
Docker in Docker enabled, initializing...
================================================================================
* Starting Docker: docker ...done.
Waiting for docker to be ready, sleeping for 1 seconds.
Cleaning up binfmt_misc ...
================================================================================
Done setting up docker in docker.
Activated service account credentials for: [prow-account@tekton-releases.iam.gserviceaccount.com]
== Running ./runner.sh backward compatibility test runner ===
+ [[ 5 -ne 0 ]]
+ case $1 in
++ cut -d = -f2
+ gcloud auth activate-service-account --key-file=/etc/test-account/service-account.json
Activated service account credentials for: [prow-account@tekton-releases.iam.gserviceaccount.com]
+ shift
+ [[ 4 -ne 0 ]]
+ case $1 in
+ shift
+ [[ -- == \-\- ]]
+ shift
+ break
+ ./test/presubmit-tests.sh --integration-tests
Changed files in commit 950a7afabe29264f4484f967470afcca06211486:
  go.mod
  go.sum
  vendor/gorm.io/driver/mysql/README.md
  vendor/gorm.io/driver/mysql/error_translator.go
  vendor/gorm.io/driver/mysql/migrator.go
  vendor/gorm.io/driver/mysql/mysql.go
  vendor/gorm.io/gorm/.golangci.yml
  vendor/gorm.io/gorm/CODE_OF_CONDUCT.md
  vendor/gorm.io/gorm/LICENSE
  vendor/gorm.io/gorm/association.go
  vendor/gorm.io/gorm/callbacks.go
  vendor/gorm.io/gorm/callbacks/associations.go
  vendor/gorm.io/gorm/callbacks/create.go
  vendor/gorm.io/gorm/callbacks/delete.go
  vendor/gorm.io/gorm/callbacks/preload.go
  vendor/gorm.io/gorm/callbacks/query.go
  vendor/gorm.io/gorm/callbacks/raw.go
  vendor/gorm.io/gorm/callbacks/update.go
  vendor/gorm.io/gorm/chainable_api.go
  vendor/gorm.io/gorm/clause/joins.go
  vendor/gorm.io/gorm/clause/limit.go
  vendor/gorm.io/gorm/clause/returning.go
  vendor/gorm.io/gorm/clause/where.go
  vendor/gorm.io/gorm/errors.go
  vendor/gorm.io/gorm/finisher_api.go
  vendor/gorm.io/gorm/generics.go
  vendor/gorm.io/gorm/gorm.go
  vendor/gorm.io/gorm/internal/lru/lru.go
  vendor/gorm.io/gorm/internal/stmt_store/stmt_store.go
  vendor/gorm.io/gorm/logger/logger.go
  vendor/gorm.io/gorm/logger/sql.go
  vendor/gorm.io/gorm/migrator/migrator.go
  vendor/gorm.io/gorm/prepare_stmt.go
  vendor/gorm.io/gorm/scan.go
  vendor/gorm.io/gorm/schema/constraint.go
  vendor/gorm.io/gorm/schema/field.go
  vendor/gorm.io/gorm/schema/index.go
  vendor/gorm.io/gorm/schema/naming.go
  vendor/gorm.io/gorm/schema/relationship.go
  vendor/gorm.io/gorm/schema/schema.go
  vendor/gorm.io/gorm/schema/serializer.go
  vendor/gorm.io/gorm/schema/utils.go
  vendor/gorm.io/gorm/statement.go
  vendor/gorm.io/gorm/utils/utils.go
  vendor/modules.txt
Updated property [component_manager/disable_update_check].
============================
==== CURRENT TEST SETUP ====
============================
>> gcloud SDK version
Google Cloud SDK 506.0.0
alpha 2025.01.10
beta 2025.01.10
bq 2.1.11
bundled-python3-unix 3.11.9
core 2025.01.10
docker-credential-gcr 1.5.0
gcloud-crc32c 1.0.0
gke-gcloud-auth-plugin 0.5.9
gsutil 5.33
kubectl 1.30.5
>> kubectl version
Client Version: v1.32.0-alpha.0
Kustomize Version: v5.4.2
>> go version
go version go1.23.4 linux/amd64
>> git version
git version 2.43.0
>> docker version
Client: Docker Engine - Community
 Version:           27.5.0
 API version:       1.47
 Go version:        go1.22.10
 Git commit:        a187fa5
 Built:             Mon Jan 13 15:25:08 2025
 OS/Arch:           linux/amd64
 Context:           default
Server: Docker Engine - Community
 Engine:
  Version:          27.5.0
  API version:      1.47 (minimum version 1.24)
  Go version:       go1.22.10
  Git commit:       38b84dc
  Built:            Mon Jan 13 15:25:08 2025
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.25
  GitCommit:        bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
 runc:
  Version:          1.2.4
  GitCommit:        v1.2.4-0-g6c52b3f
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
===================================
==== RUNNING INTEGRATION TESTS ====
===================================
/home/prow/go/src/github.com/tektoncd/results /home/prow/go/src/github.com/tektoncd/results
Running integration test test/vendor/github.com/tektoncd/plumbing/scripts/e2e-tests.sh
Running integration test test/e2e-tests.sh
+ E2E_GO_TEST_TIMEOUT=20m
+ main
+ failed=0
+ echo 'Start e2e tests with a timeout of 20m'
Start e2e tests with a timeout of 20m
++ dirname test/e2e-tests.sh
+ timeout 20m test/e2e/e2e.sh
+ trap cleanup EXIT
+ main
+ export KO_DOCKER_REPO=kind.local
+ KO_DOCKER_REPO=kind.local
+ export KIND_CLUSTER_NAME=tekton-results
+ KIND_CLUSTER_NAME=tekton-results
+ export SA_TOKEN_PATH=/tmp/tekton-results/tokens
+ SA_TOKEN_PATH=/tmp/tekton-results/tokens
+ export SSL_CERT_PATH=/tmp/tekton-results/ssl
+ SSL_CERT_PATH=/tmp/tekton-results/ssl
++ git rev-parse --show-toplevel
+ REPO=/home/prow/go/src/github.com/tektoncd/results
+ /home/prow/go/src/github.com/tektoncd/results/test/e2e/00-setup.sh
Creating cluster "tekton-results" ...
 • Ensuring node image (kindest/node:v1.32.2) 🖼 ...
DEBUG: docker/images.go:67] Pulling image: kindest/node:v1.32.2@sha256:f226345927d7e348497136874b6d207e0b32cc52154ad8323129352923a3142f ...
 ✓ Ensuring node image (kindest/node:v1.32.2) 🖼
 • Preparing nodes 📦 📦 ...
 ✓ Preparing nodes 📦 📦
 • Writing configuration 📜 ...
DEBUG: config/config.go:96] Using the following kubeadm config for node tekton-results-control-plane:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
  extraArgs:
    runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: tekton-results
controlPlaneEndpoint: tekton-results-control-plane:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.32.2
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.18.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.3
    node-labels: ""
    provider-id: kind://docker/tekton-results/tekton-results-control-plane
---
apiVersion: kubeadm.k8s.io/v1beta3
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.3
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: tekton-results-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.3
    node-labels: ""
    provider-id: kind://docker/tekton-results/tekton-results-control-plane
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
  maxPerCore: 0
iptables:
  minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
DEBUG: config/config.go:96] Using the following kubeadm config for node tekton-results-worker:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
  extraArgs:
    runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: tekton-results
controlPlaneEndpoint: tekton-results-control-plane:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.32.2
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.2
    node-labels: ""
    provider-id: kind://docker/tekton-results/tekton-results-worker
---
apiVersion: kubeadm.k8s.io/v1beta3
discovery:
  bootstrapToken:
    apiServerEndpoint: tekton-results-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.2
    node-labels: ""
    provider-id: kind://docker/tekton-results/tekton-results-worker
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
  maxPerCore: 0
iptables:
  minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️ ...
DEBUG: kubeadminit/init.go:82] I0625 11:22:43.240388 201 initconfiguration.go:261] loading configuration from "/kind/kubeadm.conf"
W0625 11:22:43.241673 201 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0625 11:22:43.242515 201 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0625 11:22:43.243253 201 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0625 11:22:43.243745 201 initconfiguration.go:361] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.32.2
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0625 11:22:43.245565 201 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0625 11:22:43.484281 201 certs.go:473] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost tekton-results-control-plane] and IPs [10.96.0.1 172.18.0.3 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0625 11:22:43.661904 201 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0625 11:22:43.727609 201 certs.go:473] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0625 11:22:43.959812 201 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0625 11:22:44.046333 201 certs.go:473] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost tekton-results-control-plane] and IPs [172.18.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost tekton-results-control-plane] and IPs [172.18.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0625 11:22:44.389147 201 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0625 11:22:44.464860 201 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0625 11:22:44.722239 201 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0625 11:22:44.804099 201 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0625 11:22:44.987211 201 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0625 11:22:45.394643 201 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0625 11:22:45.492467 201 local.go:66] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0625 11:22:45.492502 201 manifests.go:104] [control-plane] getting StaticPodSpecs
I0625 11:22:45.492727 201 certs.go:473] validating certificate period for CA certificate
I0625 11:22:45.492804 201 manifests.go:130] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0625 11:22:45.492817 201 manifests.go:130] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0625 11:22:45.492826 201 manifests.go:130] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0625 11:22:45.492831 201 manifests.go:130] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0625 11:22:45.492837 201 manifests.go:130] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0625 11:22:45.493604 201 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0625 11:22:45.493622 201 manifests.go:104] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0625 11:22:45.493818 201 manifests.go:130] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0625 11:22:45.493830 201 manifests.go:130] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0625 11:22:45.493834 201 manifests.go:130] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0625 11:22:45.493838 201 manifests.go:130] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0625 11:22:45.493842 201 manifests.go:130] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0625 11:22:45.493848 201 manifests.go:130] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0625 11:22:45.493854 201 manifests.go:130] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0625 11:22:45.494522 201 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0625 11:22:45.494538 201 manifests.go:104] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0625 11:22:45.494734 201 manifests.go:130] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0625 11:22:45.495307 201 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0625 11:22:45.495323 201 kubelet.go:70] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0625 11:22:45.656288 201 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf
I0625 11:22:45.656707 201 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0625 11:22:45.656731 201 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0625 11:22:45.656740 201 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0625 11:22:45.656750 201 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 503.151603ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
I0625 11:22:46.163661 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s  in 2 milliseconds
I0625 11:22:46.661434 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0625 11:22:47.163743 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0625 11:22:47.661118 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0625 11:22:50.392306 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s 403 Forbidden in 2231 milliseconds
I0625 11:22:50.396259 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s 403 Forbidden in 3 milliseconds
I0625 11:22:50.661798 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s 403 Forbidden in 0 milliseconds
I0625 11:22:51.161458 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s 403 Forbidden in 0 milliseconds
I0625 11:22:51.662117 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s 403 Forbidden in 0 milliseconds
I0625 11:22:52.161555 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 0 milliseconds
I0625 11:22:52.663358 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/healthz?timeout=10s 200 OK in 2 milliseconds
[api-check] The API server is healthy after 6.50305959s
I0625 11:22:52.663966 201 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf
I0625 11:22:52.664750 201 loader.go:402] Config loaded from file: /etc/kubernetes/super-admin.conf
I0625 11:22:52.665264 201 kubeconfig.go:665] ensuring that the ClusterRoleBinding for the kubeadm:cluster-admins Group exists
I0625 11:22:52.666430 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 403 Forbidden in 0 milliseconds
I0625 11:22:52.666555 201 kubeconfig.go:738] creating the ClusterRoleBinding for the kubeadm:cluster-admins Group by using super-admin.conf
I0625 11:22:52.677520 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 10 milliseconds
I0625 11:22:52.677630 201 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0625 11:22:52.684176 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 5 milliseconds
I0625 11:22:52.688854 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 4 milliseconds
I0625 11:22:52.696665 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 7 milliseconds
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0625 11:22:52.696751 201 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
I0625 11:22:52.701677 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 4 milliseconds
I0625 11:22:52.707655 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 5 milliseconds
I0625 11:22:52.714258 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 6 milliseconds
I0625 11:22:52.714327 201 uploadconfig.go:132] [upload-config] Preserving the CRISocket information for the control-plane node
I0625 11:22:52.714340 201 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "tekton-results-control-plane" as an annotation
I0625 11:22:52.715942 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/nodes/tekton-results-control-plane?timeout=10s 200 OK in 1 milliseconds
I0625 11:22:52.727001 201 round_trippers.go:560] PATCH https://tekton-results-control-plane:6443/api/v1/nodes/tekton-results-control-plane?timeout=10s 200 OK in 10 milliseconds
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node tekton-results-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node tekton-results-control-plane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
I0625 11:22:52.728576 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/nodes/tekton-results-control-plane?timeout=10s 200 OK in 1 milliseconds
I0625 11:22:52.734154 201 round_trippers.go:560] PATCH https://tekton-results-control-plane:6443/api/v1/nodes/tekton-results-control-plane?timeout=10s 200 OK in 4 milliseconds
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0625 11:22:52.735740 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef?timeout=10s 404 Not Found in 1 milliseconds
I0625 11:22:52.740515 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/secrets?timeout=10s 201 Created in 4 milliseconds
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0625 11:22:52.747213 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 6 milliseconds
I0625 11:22:52.753344 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 5 milliseconds
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0625 11:22:52.759224 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 5 milliseconds
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0625 11:22:52.766155 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 6 milliseconds
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0625 11:22:52.772649 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 6 milliseconds
I0625 11:22:52.772737 201 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0625 11:22:52.773283 201 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf
I0625 11:22:52.773310 201 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0625 11:22:52.773551 201 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0625 11:22:52.779862 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s 201 Created in 6 milliseconds
I0625 11:22:52.779958 201 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0625 11:22:52.866251 201 request.go:661] Waited for 86.217501ms due to client-side throttling, not priority and fairness, request: POST:https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s
I0625 11:22:52.869872 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s 201 Created in 3 milliseconds
I0625 11:22:53.066223 201 request.go:661] Waited for 196.208599ms due to client-side throttling, not priority and fairness, request: POST:https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s
I0625 11:22:53.068718 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s 201 Created in 2 milliseconds
I0625 11:22:53.068882 201 kubeletfinalize.go:91] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0625 11:22:53.069517 201 loader.go:402] Config loaded from file: /etc/kubernetes/kubelet.conf
I0625 11:22:53.070052 201 kubeletfinalize.go:145] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
I0625 11:22:53.268978 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 2 milliseconds
I0625 11:22:53.271498 201 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 1 milliseconds
I0625 11:22:53.274324 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds
I0625 11:22:53.278401 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 3 milliseconds
I0625 11:22:53.466238 201 request.go:661] Waited for 187.343921ms due to client-side throttling, not priority and fairness, request: POST:https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s
I0625 11:22:53.468652 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds
I0625 11:22:53.471220 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 2 milliseconds
I0625 11:22:53.478237 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s 201 Created in 3 milliseconds
I0625 11:22:53.489708 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/services?timeout=10s 201 Created in 10 milliseconds
[addons] Applied essential addon: CoreDNS
I0625 11:22:53.494498 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 3 milliseconds
I0625 11:22:53.498668 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s 201 Created in 3 milliseconds
I0625 11:22:53.679155 201 request.go:661] Waited for 180.27869ms due to client-side throttling, not priority and fairness, request: POST:https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s
I0625 11:22:53.682024 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 2 milliseconds
I0625 11:22:53.684601 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds
I0625 11:22:53.866043 201 request.go:661] Waited for 181.246441ms due to client-side throttling, not priority and fairness, request: POST:https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s
I0625 11:22:53.868589 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 2 milliseconds
I0625 11:22:54.066138 201 request.go:661] Waited for 197.341755ms due to client-side throttling, not priority and fairness, request: POST:https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s
I0625 11:22:54.068888 201 round_trippers.go:560] POST https://tekton-results-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 2 milliseconds
[addons] Applied essential addon: kube-proxy
I0625 11:22:54.069580 201 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf
I0625 11:22:54.070118 201 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:

  kubeadm join tekton-results-control-plane:6443 --token \
	--discovery-token-ca-cert-hash sha256:b1aa35f75ca54678eb6f00c96d56cbded1ace8a3303059c1ed26e1b918d1ac85 \
	--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join tekton-results-control-plane:6443 --token \
	--discovery-token-ca-cert-hash sha256:b1aa35f75ca54678eb6f00c96d56cbded1ace8a3303059c1ed26e1b918d1ac85
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌 ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾 ...
 ✓ Installing StorageClass 💾
 • Joining worker nodes 🚜 ...
DEBUG: kubeadmjoin/join.go:133] I0625 11:22:55.112824 229 join.go:421] [preflight] found NodeName empty; using OS hostname as NodeName
I0625 11:22:55.112883 229 joinconfiguration.go:83] loading configuration from "/kind/kubeadm.conf"
W0625 11:22:55.113447 229 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0625 11:22:55.114061 229 controlplaneprepare.go:225] [download-certs] Skipping certs download
I0625 11:22:55.114102 229 join.go:551] [preflight] Discovering cluster-info
I0625 11:22:55.114135 229 token.go:72] [discovery] Created cluster-info discovery client, requesting info from "tekton-results-control-plane:6443"
I0625 11:22:55.114362 229 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0625 11:22:55.114383 229 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0625 11:22:55.114389 229 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0625 11:22:55.114409 229 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0625 11:22:55.114794 229 token.go:230] [discovery] Waiting for the cluster-info ConfigMap to receive a JWS signature for token ID "abcdef"
I0625 11:22:55.139383 229 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 24 milliseconds
I0625 11:22:55.139597 229 token.go:250] [discovery] Retrying due to error: could not find a JWS signature in the cluster-info ConfigMap for token ID "abcdef"
I0625 11:23:00.119182 229 token.go:230] [discovery] Waiting for the cluster-info ConfigMap to receive a JWS signature for token ID "abcdef"
I0625 11:23:00.151303 229 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 31 milliseconds
I0625 11:23:00.152055 229 token.go:114] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "tekton-results-control-plane:6443"
I0625 11:23:00.152129 229 discovery.go:53] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0625 11:23:00.152148 229 join.go:565] [preflight] Fetching init configuration
I0625 11:23:00.152155 229 join.go:652] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
I0625 11:23:00.184793 229 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 32 milliseconds
I0625 11:23:00.186105 229 kubeproxy.go:55] attempting to download the KubeProxyConfiguration from ConfigMap "kube-proxy"
I0625 11:23:00.187852 229 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy?timeout=10s 200 OK in 1 milliseconds
I0625 11:23:00.189830 229 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I0625 11:23:00.191631 229 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config?timeout=10s 200 OK in 1 milliseconds
I0625 11:23:00.194253 229 initconfiguration.go:115] skip CRI socket detection, fill with the default CRI socket unix:///var/run/containerd/containerd.sock
I0625 11:23:00.194520 229 interface.go:432] Looking for default routes with IPv4 addresses
I0625 11:23:00.194534 229 interface.go:437] Default route transits interface "eth0"
I0625 11:23:00.194624 229 interface.go:209] Interface eth0 is up
I0625 11:23:00.194735 229 interface.go:257] Interface "eth0" has 3 addresses :[172.18.0.2/16 fc00:f853:ccd:e793::2/64 fe80::42:acff:fe12:2/64].
I0625 11:23:00.194814 229 interface.go:224] Checking addr 172.18.0.2/16.
I0625 11:23:00.194886 229 interface.go:231] IP found 172.18.0.2
I0625 11:23:00.194986 229 interface.go:263] Found valid IPv4 address 172.18.0.2 for interface "eth0".
I0625 11:23:00.195044 229 interface.go:443] Found active IP 172.18.0.2
I0625 11:23:00.195626 229 kubelet.go:183] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0625 11:23:00.196562 229 kubelet.go:198] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0625 11:23:00.196794 229 kubelet.go:214] [kubelet-start] Checking for an existing Node in the cluster with name "tekton-results-worker" and status "Ready"
I0625 11:23:00.198430 229 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/nodes/tekton-results-worker?timeout=10s 404 Not Found in 1 milliseconds
I0625 11:23:00.198675 229 kubelet.go:229] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.615412ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
I0625 11:23:00.927866 229 loader.go:402] Config loaded from file: /etc/kubernetes/kubelet.conf
I0625 11:23:00.928545 229 cert_rotation.go:140] Starting client certificate rotation controller
I0625 11:23:00.928909 229 loader.go:402] Config loaded from file: /etc/kubernetes/kubelet.conf
I0625 11:23:00.929285 229 kubelet.go:337] [kubelet-start] preserving the crisocket information for the node
I0625 11:23:00.929309 229 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "tekton-results-worker" as an annotation
I0625 11:23:00.952590 229 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/nodes/tekton-results-worker?timeout=10s 404 Not Found in 23 milliseconds
I0625 11:23:01.433969 229 round_trippers.go:560] GET https://tekton-results-control-plane:6443/api/v1/nodes/tekton-results-worker?timeout=10s 200 OK in 3 milliseconds

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

I0625 11:23:01.440854 229 round_trippers.go:560] PATCH https://tekton-results-control-plane:6443/api/v1/nodes/tekton-results-worker?timeout=10s 200 OK in 5 milliseconds
 ✓ Joining worker nodes 🚜
 â€ĸ Waiting ≤ 1m0s for control-plane = Ready âŗ  ...
 ✓ Waiting ≤ 1m0s for control-plane = Ready âŗ
 â€ĸ Ready after 10s 💚
Set kubectl context to "kind-tekton-results"
You can now use your cluster with:

kubectl cluster-info --context kind-tekton-results

Thanks for using kind! 😊
Set kubectl context to "kind-tekton-results"
+ /home/prow/go/src/github.com/tektoncd/results/test/e2e/01-install.sh
Installing Tekton Pipelines...
namespace/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
role.rbac.authorization.k8s.io/tekton-pipelines-controller created
role.rbac.authorization.k8s.io/tekton-pipelines-webhook created
role.rbac.authorization.k8s.io/tekton-pipelines-events-controller created
role.rbac.authorization.k8s.io/tekton-pipelines-leader-election created
role.rbac.authorization.k8s.io/tekton-pipelines-info created
serviceaccount/tekton-pipelines-controller created
serviceaccount/tekton-pipelines-webhook created
serviceaccount/tekton-events-controller created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-leaderelection created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-info created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-events-controller created
rolebinding.rbac.authorization.k8s.io/tekton-events-controller-leaderelection created
customresourcedefinition.apiextensions.k8s.io/customruns.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/pipelines.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/pipelineruns.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/resolutionrequests.resolution.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/stepactions.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/taskruns.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/verificationpolicies.tekton.dev created
secret/webhook-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.pipeline.tekton.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.pipeline.tekton.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.pipeline.tekton.dev created
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-edit created
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-view created
configmap/config-defaults created
configmap/config-events created
configmap/feature-flags created
configmap/pipelines-info created
configmap/config-leader-election-controller created
configmap/config-leader-election-events created
configmap/config-leader-election-webhook created
configmap/config-logging created
configmap/config-observability created
configmap/config-registry-cert created
configmap/config-spire created
configmap/config-tracing created
deployment.apps/tekton-pipelines-controller created
service/tekton-pipelines-controller created
deployment.apps/tekton-events-controller created
service/tekton-events-controller created
namespace/tekton-pipelines-resolvers created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-resolvers-resolution-request-updates created
role.rbac.authorization.k8s.io/tekton-pipelines-resolvers-namespace-rbac created
serviceaccount/tekton-pipelines-resolvers created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-resolvers created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-resolvers-namespace-rbac created
configmap/bundleresolver-config created
configmap/cluster-resolver-config created
configmap/resolvers-feature-flags created
configmap/config-leader-election-resolvers created
configmap/config-logging created
configmap/config-observability created
configmap/git-resolver-config created
configmap/http-resolver-config created
configmap/hubresolver-config created
deployment.apps/tekton-pipelines-remote-resolvers created
service/tekton-pipelines-remote-resolvers created
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook created
deployment.apps/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created
Generating DB secret...
secret/tekton-results-postgres created
Generating TLS key pair...
secret/tekton-results-tls created
Installing Tekton Results...
2025/06/25 11:23:19 Using base cgr.dev/chainguard/static:latest@sha256:092aad9f6448695b6e20333a8faa93fe3637bcf4e88aa804b8f01545eaf288bd for github.com/tektoncd/results/cmd/retention-policy-agent
2025/06/25 11:23:19 Using base cgr.dev/chainguard/static:latest@sha256:092aad9f6448695b6e20333a8faa93fe3637bcf4e88aa804b8f01545eaf288bd for github.com/tektoncd/results/cmd/api
2025/06/25 11:23:19 Using base cgr.dev/chainguard/static:latest@sha256:092aad9f6448695b6e20333a8faa93fe3637bcf4e88aa804b8f01545eaf288bd for github.com/tektoncd/results/cmd/watcher
2025/06/25 11:23:19 Building github.com/tektoncd/results/cmd/api for linux/amd64
2025/06/25 11:23:19 Building github.com/tektoncd/results/cmd/retention-policy-agent for linux/amd64
2025/06/25 11:23:19 Building github.com/tektoncd/results/cmd/watcher for linux/amd64
2025/06/25 11:28:08 Loading kind.local/retention-policy-agent-07427b345034d96a9a27896ebb138518:2bdc3ce3b47d7d2ed95ae3bcdb6896d9b1c93f4352205620f4d1448a82dfcad9
2025/06/25 11:28:12 Loading kind.local/api-b1b7ffa9ba32f7c3020c3b68830b30a8:dce61e7b934b41b9d838db126dc7f99de07a034d0cf5415a3d71d11e7b0df390
2025/06/25 11:28:14 Loaded kind.local/retention-policy-agent-07427b345034d96a9a27896ebb138518:2bdc3ce3b47d7d2ed95ae3bcdb6896d9b1c93f4352205620f4d1448a82dfcad9
2025/06/25 11:28:14 Adding tag latest
2025/06/25 11:28:14 Added tag latest
2025/06/25 11:28:17 Loading kind.local/watcher-83f971ea227fb24157c0c699b824a628:8df010a2c598f6c793ccbc8263d39eaba29bdc95ccdd6d6d24f861d72a0e1ccf
2025/06/25 11:28:18 Loaded kind.local/api-b1b7ffa9ba32f7c3020c3b68830b30a8:dce61e7b934b41b9d838db126dc7f99de07a034d0cf5415a3d71d11e7b0df390
2025/06/25 11:28:18 Adding tag latest
2025/06/25 11:28:18 Added tag latest
2025/06/25 11:28:22 Loaded kind.local/watcher-83f971ea227fb24157c0c699b824a628:8df010a2c598f6c793ccbc8263d39eaba29bdc95ccdd6d6d24f861d72a0e1ccf
2025/06/25 11:28:22 Adding tag latest
2025/06/25 11:28:22 Added tag latest
serviceaccount/all-namespaces-admin-access created
serviceaccount/all-namespaces-impersonate-access created
serviceaccount/all-namespaces-read-access created
serviceaccount/single-namespace-read-access created
serviceaccount/tekton-results-api created
serviceaccount/tekton-results-watcher created
role.rbac.authorization.k8s.io/tekton-results-info created
clusterrole.rbac.authorization.k8s.io/tekton-results-admin created
clusterrole.rbac.authorization.k8s.io/tekton-results-api created
clusterrole.rbac.authorization.k8s.io/tekton-results-impersonate created
clusterrole.rbac.authorization.k8s.io/tekton-results-readonly created
clusterrole.rbac.authorization.k8s.io/tekton-results-readwrite created
clusterrole.rbac.authorization.k8s.io/tekton-results-watcher created
rolebinding.rbac.authorization.k8s.io/single-namespace-read-access created
rolebinding.rbac.authorization.k8s.io/tekton-results-info created
clusterrolebinding.rbac.authorization.k8s.io/all-namespaces-admin-access created
clusterrolebinding.rbac.authorization.k8s.io/all-namespaces-impersonate-access created
clusterrolebinding.rbac.authorization.k8s.io/all-namespaces-read-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-results-api created
clusterrolebinding.rbac.authorization.k8s.io/tekton-results-watcher created
configmap/tekton-results-api-config created
configmap/tekton-results-config-leader-election created
configmap/tekton-results-config-logging created
configmap/tekton-results-config-observability created
configmap/tekton-results-config-results-retention-policy created
configmap/tekton-results-info created
configmap/tekton-results-postgres created
service/tekton-results-api-service created
service/tekton-results-postgres-service created
service/tekton-results-watcher created
deployment.apps/tekton-results-api created
deployment.apps/tekton-results-retention-policy-agent created
deployment.apps/tekton-results-watcher created
statefulset.apps/tekton-results-postgres created
Fetching access tokens...
Created /tmp/tekton-results/tokens/all-namespaces-read-access
Created /tmp/tekton-results/tokens/single-namespace-read-access
Created /tmp/tekton-results/tokens/all-namespaces-admin-access
Created /tmp/tekton-results/tokens/all-namespaces-impersonate-access
Waiting for deployments to be ready...
pod/tekton-results-postgres-0 condition met
deployment.apps/tekton-results-api condition met
deployment.apps/tekton-results-watcher condition met
+ export CGO_ENABLED=0
+ CGO_ENABLED=0
++ go list --tags=e2e /home/prow/go/src/github.com/tektoncd/results/test/e2e/...
++ grep -v /client
+ go test -v -count=1 --tags=e2e github.com/tektoncd/results/test/e2e
=== RUN   TestTaskRun
=== RUN   TestTaskRun/check_annotations
=== RUN   TestTaskRun/check_deletion
=== RUN   TestTaskRun/check_result
=== RUN   TestTaskRun/check_record
=== RUN   TestTaskRun/check_event_record
--- PASS: TestTaskRun (29.05s)
    --- PASS: TestTaskRun/check_annotations (15.01s)
    --- PASS: TestTaskRun/check_deletion (13.01s)
    --- PASS: TestTaskRun/check_result (0.18s)
    --- PASS: TestTaskRun/check_record (0.40s)
    --- PASS: TestTaskRun/check_event_record (0.40s)
=== RUN   TestPipelineRun
=== RUN   TestPipelineRun/check_annotations
=== RUN   TestPipelineRun/check_deletion
=== RUN   TestPipelineRun/check_result
=== RUN   TestPipelineRun/check_record
=== RUN   TestPipelineRun/check_event_record
=== RUN   TestPipelineRun/result_data_consistency
=== RUN   TestPipelineRun/result_data_consistency/Result_and_RecordSummary_Annotations_were_set_accordingly
=== RUN   TestPipelineRun/result_data_consistency/the_PipelineRun_was_archived_in_its_final_state
--- PASS: TestPipelineRun (27.80s)
    --- PASS: TestPipelineRun/check_annotations (16.01s)
    --- PASS: TestPipelineRun/check_deletion (8.01s)
    --- PASS: TestPipelineRun/check_result (0.75s)
    --- PASS: TestPipelineRun/check_record (0.80s)
    --- PASS: TestPipelineRun/check_event_record (0.80s)
    --- PASS: TestPipelineRun/result_data_consistency (1.40s)
        --- PASS: TestPipelineRun/result_data_consistency/Result_and_RecordSummary_Annotations_were_set_accordingly (0.00s)
        --- PASS: TestPipelineRun/result_data_consistency/the_PipelineRun_was_archived_in_its_final_state (0.60s)
=== RUN   TestGRPCLogging
=== RUN   TestGRPCLogging/log_entry_is_found_when_not_expected
=== RUN   TestGRPCLogging/log_entry_is_found_when_expected
--- PASS: TestGRPCLogging (0.41s)
    --- PASS: TestGRPCLogging/log_entry_is_found_when_not_expected (0.02s)
    --- PASS: TestGRPCLogging/log_entry_is_found_when_expected (0.38s)
=== RUN   TestListResults
=== RUN   TestListResults/list_results_under_the_default_parent
=== RUN   TestListResults/list_results_across_parents
=== RUN   TestListResults/return_an_error_because_the_identity_isn't_authorized_to_access_all_namespaces
=== RUN   TestListResults/list_results_under_the_default_parent_using_the_identity_with_more_limited_access
=== RUN   TestListResults/grpc_and_rest_consistency
--- PASS: TestListResults (2.79s)
    --- PASS: TestListResults/list_results_under_the_default_parent (0.39s)
    --- PASS: TestListResults/list_results_across_parents (0.80s)
    --- PASS: TestListResults/return_an_error_because_the_identity_isn't_authorized_to_access_all_namespaces (0.40s)
    --- PASS: TestListResults/list_results_under_the_default_parent_using_the_identity_with_more_limited_access (0.40s)
    --- PASS: TestListResults/grpc_and_rest_consistency (0.80s)
=== RUN   TestListRecords
=== RUN   TestListRecords/list_records_by_omitting_the_result_name
=== RUN   TestListRecords/list_records_by_omitting_the_parent_and_result_names
=== RUN   TestListRecords/return_an_error_because_the_identity_isn't_authorized_to_access_all_namespaces
=== RUN   TestListRecords/list_records_using_the_identity_with_more_limited_access
=== RUN   TestListRecords/grpc_and_rest_consistency
--- PASS: TestListRecords (2.95s)
    --- PASS: TestListRecords/list_records_by_omitting_the_result_name (0.40s)
    --- PASS: TestListRecords/list_records_by_omitting_the_parent_and_result_names (0.80s)
    --- PASS: TestListRecords/return_an_error_because_the_identity_isn't_authorized_to_access_all_namespaces (0.40s)
    --- PASS: TestListRecords/list_records_using_the_identity_with_more_limited_access (0.40s)
    --- PASS: TestListRecords/grpc_and_rest_consistency (0.95s)
=== RUN   TestGetResult
=== RUN   TestGetResult/get_result
=== RUN   TestGetResult/get_result/grpc
=== RUN   TestGetResult/get_result/rest
=== RUN   TestGetResult/grpc_and_rest_consistency
--- PASS: TestGetResult (1.05s)
    --- PASS: TestGetResult/get_result (0.80s)
        --- PASS: TestGetResult/get_result/grpc (0.40s)
        --- PASS: TestGetResult/get_result/rest (0.40s)
    --- PASS: TestGetResult/grpc_and_rest_consistency (0.00s)
=== RUN   TestGetRecord
=== RUN   TestGetRecord/get_record
=== RUN   TestGetRecord/get_record/grpc
=== RUN   TestGetRecord/get_record/rest
=== RUN   TestGetRecord/grpc_and_rest_consistency
--- PASS: TestGetRecord (1.21s)
    --- PASS: TestGetRecord/get_record (0.80s)
        --- PASS: TestGetRecord/get_record/grpc (0.40s)
        --- PASS: TestGetRecord/get_record/rest (0.40s)
    --- PASS: TestGetRecord/grpc_and_rest_consistency (0.01s)
=== RUN   TestDeleteRecord
=== RUN   TestDeleteRecord/delete_record
=== RUN   TestDeleteRecord/delete_record/grpc
=== RUN   TestDeleteRecord/delete_record/rest
--- PASS: TestDeleteRecord (1.99s)
    --- PASS: TestDeleteRecord/delete_record (1.60s)
        --- PASS: TestDeleteRecord/delete_record/grpc (0.80s)
        --- PASS: TestDeleteRecord/delete_record/rest (0.80s)
=== RUN   TestDeleteResult
=== RUN   TestDeleteResult/delete_result
=== RUN   TestDeleteResult/delete_result/grpc
=== RUN   TestDeleteResult/delete_result/rest
--- PASS: TestDeleteResult (2.00s)
    --- PASS: TestDeleteResult/delete_result (1.60s)
        --- PASS: TestDeleteResult/delete_result/grpc (0.80s)
        --- PASS: TestDeleteResult/delete_result/rest (0.80s)
=== RUN   TestAuthentication
=== RUN   TestAuthentication/valid_token
=== RUN   TestAuthentication/valid_token/grpc
=== RUN   TestAuthentication/valid_token/rest
=== RUN   TestAuthentication/invalid_token
=== RUN   TestAuthentication/invalid_token/grpc
=== RUN   TestAuthentication/invalid_token/rest
--- PASS: TestAuthentication (1.20s)
    --- PASS: TestAuthentication/valid_token (0.80s)
        --- PASS: TestAuthentication/valid_token/grpc (0.39s)
        --- PASS: TestAuthentication/valid_token/rest (0.40s)
    --- PASS: TestAuthentication/invalid_token (0.40s)
        --- PASS: TestAuthentication/invalid_token/grpc (0.19s)
        --- PASS: TestAuthentication/invalid_token/rest (0.20s)
=== RUN   TestAuthorization
=== RUN   TestAuthorization/unauthorized_token
=== RUN   TestAuthorization/unauthorized_token/grpc
=== RUN   TestAuthorization/unauthorized_token/rest
--- PASS: TestAuthorization (0.80s)
    --- PASS: TestAuthorization/unauthorized_token (0.79s)
        --- PASS: TestAuthorization/unauthorized_token/grpc (0.38s)
        --- PASS: TestAuthorization/unauthorized_token/rest (0.40s)
=== RUN   TestImpersonation
=== RUN   TestImpersonation/impersonate_with_user_not_having_permission
=== RUN   TestImpersonation/impersonate_with_user_not_having_permission/grpc
=== RUN   TestImpersonation/impersonate_with_user_not_having_permission/rest
=== RUN   TestImpersonation/impersonate_with_user_having_permission
=== RUN   TestImpersonation/impersonate_with_user_having_permission/grpc
=== RUN   TestImpersonation/impersonate_with_user_having_permission/rest
--- PASS: TestImpersonation (2.40s)
    --- PASS: TestImpersonation/impersonate_with_user_not_having_permission (1.20s)
        --- PASS: TestImpersonation/impersonate_with_user_not_having_permission/grpc (0.59s)
        --- PASS: TestImpersonation/impersonate_with_user_not_having_permission/rest (0.60s)
    --- PASS: TestImpersonation/impersonate_with_user_having_permission (1.20s)
        --- PASS: TestImpersonation/impersonate_with_user_having_permission/grpc (0.59s)
        --- PASS: TestImpersonation/impersonate_with_user_having_permission/rest (0.60s)
PASS
ok  	github.com/tektoncd/results/test/e2e	73.667s
+ kubectl apply -f /home/prow/go/src/github.com/tektoncd/results/test/e2e/gcs-emulator.yaml
deployment.apps/gcs-emulator created
service/gcs-emulator created
configmap/tekton-results-api-config configured
++ kubectl get pod -o=name -n tekton-pipelines
++ grep tekton-results-api
++ sed 's/^.\{4\}//'
+ kubectl delete pod tekton-results-api-9845796f7-b4mqx -n tekton-pipelines
pod "tekton-results-api-9845796f7-b4mqx" deleted
+ kubectl wait deployment tekton-results-api --namespace=tekton-pipelines --for=condition=available --timeout=120s
error: timed out waiting for the condition on deployments/tekton-results-api
+ cleanup
+ kind delete cluster
Deleting cluster "tekton-results" ...
Deleted nodes: ["tekton-results-worker" "tekton-results-control-plane"]
+ failed=1
+ return 1
==================================
==== INTEGRATION TESTS FAILED ====
==================================
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up binfmt_misc ...
================================================================================
Done cleaning up after docker in docker.