Docker in Docker enabled, initializing...
================================================================================
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
Starting Docker: docker.
Waiting for docker to be ready, sleeping for 1 seconds.
================================================================================
Done setting up docker in docker.
+ WRAPPED_COMMAND_PID=234
+ wait 234
+ ./test_e2e.sh
Building kubebuilder
go: downloading github.com/spf13/afero v1.14.0
go: downloading github.com/sirupsen/logrus v1.9.3
go: downloading golang.org/x/tools v0.33.0
go: downloading github.com/spf13/pflag v1.0.6
go: downloading sigs.k8s.io/yaml v1.4.0
go: downloading golang.org/x/text v0.25.0
go: downloading github.com/spf13/cobra v1.9.1
go: downloading github.com/gobuffalo/flect v1.0.3
go: downloading golang.org/x/sys v0.33.0
go: downloading golang.org/x/sync v0.14.0
go: downloading golang.org/x/mod v0.24.0
Installing setup-envtest to /home/prow/go/bin
go: downloading sigs.k8s.io/controller-runtime v0.20.5-0.20250517180713-32e5e9e948a5
go: downloading sigs.k8s.io/controller-runtime/tools/setup-envtest v0.0.0-20250517180713-32e5e9e948a5
go: downloading github.com/go-logr/zapr v1.3.0
go: downloading github.com/spf13/afero v1.12.0
go: downloading go.uber.org/zap v1.27.0
go: downloading github.com/go-logr/logr v1.4.2
go: downloading golang.org/x/text v0.21.0
go: downloading go.uber.org/multierr v1.10.0
Installing e2e tools with setup-envtest
Version: 1.33.0
OS/Arch: linux/amd64
sha512: 2cb7f5468ed7cea1492f971b715bcc27069e824cf7d5927b7f127f1e8c75cf086eea050543cdb5f79faee0a2bf775f160adf27443aa7ee845d962d04e9d43ac9
Path: /root/.local/share/kubebuilder-envtest/k8s/1.33.0-linux-amd64
Installing kind to /home/prow/go/bin
go: downloading sigs.k8s.io/kind v0.29.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/mattn/go-isatty v0.0.20
go: downloading al.essio.dev/pkg/shellescape v1.5.1
go: downloading github.com/spf13/cobra v1.8.0
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/pelletier/go-toml v1.9.5
go: downloading github.com/BurntSushi/toml v1.4.0
go: downloading github.com/evanphx/json-patch/v5 v5.6.0
go: downloading golang.org/x/sys v0.6.0
Getting kind config...
No kind clusters found.
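For reference, the same toolchain can be installed outside the CI image with standard go install commands; a minimal sketch, assuming Go is on PATH and pinning the versions seen in the log above:

  # install setup-envtest and fetch the envtest binaries for Kubernetes 1.33.0
  go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
  setup-envtest use 1.33.0
  # install the kind CLI at the version this job uses
  go install sigs.k8s.io/kind@v0.29.0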
Creating cluster...
Creating cluster "kind" ...
 • Ensuring node image (kindest/node:v1.33.0) 🖼  ...
DEBUG: docker/images.go:67] Pulling image: kindest/node:v1.33.0 ...
 ✓ Ensuring node image (kindest/node:v1.33.0) 🖼
 • Preparing nodes 📦  ...
 ✓ Preparing nodes 📦
 • Writing configuration 📜  ...
DEBUG: config/config.go:96] Using the following kubeadm config for node kind-control-plane:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
  extraArgs:
    runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: kind
controlPlaneEndpoint: kind-control-plane:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.33.0
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.2
    node-labels: ""
    provider-id: kind://docker/kind/kind-control-plane
skipPhases:
- preflight
---
apiVersion: kubeadm.k8s.io/v1beta3
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.2
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.2
    node-labels: ""
    provider-id: kind://docker/kind/kind-control-plane
skipPhases:
- preflight
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
  maxPerCore: 0
iptables:
  minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️ ...
DEBUG: kubeadminit/init.go:96] I0602 07:54:06.820046 182 initconfiguration.go:261] loading configuration from "/kind/kubeadm.conf"
W0602 07:54:06.820826 182 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old-config-file --new-config new-config-file', which will write the new, similar spec using a newer API version.
W0602 07:54:06.821517 182 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old-config-file --new-config new-config-file', which will write the new, similar spec using a newer API version.
W0602 07:54:06.822010 182 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old-config-file --new-config new-config-file', which will write the new, similar spec using a newer API version.
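The three warnings above come from kubeadm itself and can be resolved with the one-shot migration it suggests; a sketch using the config path from the log (the output filename is illustrative):

  kubeadm config migrate --old-config /kind/kubeadm.conf --new-config kubeadm-v1beta4.yaml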
W0602 07:54:06.822360 182 initconfiguration.go:362] [config] WARNING: Ignored configuration document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.33.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0602 07:54:06.824217 182 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0602 07:54:07.053920 182 certs.go:473] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.18.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0602 07:54:07.585426 182 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0602 07:54:07.693049 182 certs.go:473] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0602 07:54:08.016324 182 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0602 07:54:08.654955 182 certs.go:473] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0602 07:54:09.829144 182 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0602 07:54:10.450440 182 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0602 07:54:10.653497 182 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0602 07:54:11.045306 182 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0602 07:54:11.375175 182 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0602 07:54:11.754023 182 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0602 07:54:12.239858 182 local.go:66] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0602 07:54:12.239938 182 manifests.go:104] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0602 07:54:12.240272 182 certs.go:473] validating certificate period for CA certificate
I0602 07:54:12.240363 182 manifests.go:130] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0602 07:54:12.240389 182 manifests.go:130] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0602 07:54:12.240398 182 manifests.go:130] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0602 07:54:12.240405 182 manifests.go:130] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0602 07:54:12.240410 182 manifests.go:130] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0602 07:54:12.241463 182 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0602 07:54:12.241495 182 manifests.go:104] [control-plane] getting StaticPodSpecs
I0602 07:54:12.241761 182 manifests.go:130] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0602 07:54:12.241785 182 manifests.go:130] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0602 07:54:12.241791 182 manifests.go:130] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0602 07:54:12.241797 182 manifests.go:130] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0602 07:54:12.241803 182 manifests.go:130] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0602 07:54:12.241809 182 manifests.go:130] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0602 07:54:12.241815 182 manifests.go:130] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0602 07:54:12.242820 182 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0602 07:54:12.242858 182 manifests.go:104] [control-plane] getting StaticPodSpecs
I0602 07:54:12.243105 182 manifests.go:130] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0602 07:54:12.243763 182 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0602 07:54:12.243803 182 kubelet.go:70] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0602 07:54:12.517064 182 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf
I0602 07:54:12.517669 182 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0602 07:54:12.517715 182 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0602 07:54:12.517726 182 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0602 07:54:12.517735 182 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0602 07:54:12.517744 182 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
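The kubelet health endpoint being polled here is plain HTTP bound to localhost on the node; it can also be probed by hand, e.g. (a sketch, assuming curl is available inside the kind node container):

  docker exec kind-control-plane curl -s http://127.0.0.1:10248/healthz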
[kubelet-check] The kubelet is healthy after 1.501639007s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://172.18.0.2:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I0602 07:54:14.025916 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="" milliseconds=1
I0602 07:54:14.526342 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="" milliseconds=0
I0602 07:54:15.025831 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="" milliseconds=0
I0602 07:54:15.525827 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="" milliseconds=0
[control-plane-check] kube-controller-manager is healthy after 3.261010669s
I0602 07:54:18.760671 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="403 Forbidden" milliseconds=2735
I0602 07:54:18.770095 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="403 Forbidden" milliseconds=4
[control-plane-check] kube-scheduler is healthy after 4.834979391s
I0602 07:54:19.025910 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="403 Forbidden" milliseconds=0
I0602 07:54:19.526785 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="403 Forbidden" milliseconds=1
I0602 07:54:20.026437 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="403 Forbidden" milliseconds=0
I0602 07:54:20.526200 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/livez?timeout=10s" status="200 OK" milliseconds=1
[control-plane-check] kube-apiserver is healthy after 6.501593797s
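Once the apiserver answers 200 on /livez, the same probes can be issued through kubectl against the live cluster; a sketch:

  kubectl get --raw='/livez?verbose'   # per-check liveness of the apiserver
  kubectl get --raw='/readyz?verbose'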
I0602 07:54:20.526897 182 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf
I0602 07:54:20.527801 182 loader.go:402] Config loaded from file: /etc/kubernetes/super-admin.conf
I0602 07:54:20.528363 182 kubeconfig.go:665] ensuring that the ClusterRoleBinding for the kubeadm:cluster-admins Group exists
I0602 07:54:20.529805 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s" status="403 Forbidden" milliseconds=1
I0602 07:54:20.529932 182 kubeconfig.go:738] creating the ClusterRoleBinding for the kubeadm:cluster-admins Group by using super-admin.conf
I0602 07:54:20.540673 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s" status="201 Created" milliseconds=10
I0602 07:54:20.540822 182 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0602 07:54:20.546253 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:20.549728 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:20.553038 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s" status="201 Created" milliseconds=3
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0602 07:54:20.553180 182 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
I0602 07:54:20.557214 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:20.560620 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:20.563689 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s" status="201 Created" milliseconds=2
I0602 07:54:20.563796 182 uploadconfig.go:132] [upload-config] Preserving the CRISocket information for the control-plane node
I0602 07:54:20.563814 182 patchnode.go:32] [patchnode] Uploading the CRI socket "unix:///run/containerd/containerd.sock" to Node "kind-control-plane" as an annotation
I0602 07:54:20.566368 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/api/v1/nodes/kind-control-plane?timeout=10s" status="200 OK" milliseconds=2
I0602 07:54:20.573238 182 round_trippers.go:632] "Response" verb="PATCH" url="https://kind-control-plane:6443/api/v1/nodes/kind-control-plane?timeout=10s" status="200 OK" milliseconds=5
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
I0602 07:54:20.576421 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/api/v1/nodes/kind-control-plane?timeout=10s" status="200 OK" milliseconds=2
I0602 07:54:20.585242 182 round_trippers.go:632] "Response" verb="PATCH" url="https://kind-control-plane:6443/api/v1/nodes/kind-control-plane?timeout=10s" status="200 OK" milliseconds=7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0602 07:54:20.586237 182 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf
I0602 07:54:20.588808 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef?timeout=10s" status="404 Not Found" milliseconds=2
I0602 07:54:20.592978 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/secrets?timeout=10s" status="201 Created" milliseconds=3
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0602 07:54:20.596525 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:20.600850 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s" status="201 Created" milliseconds=4
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0602 07:54:20.604515 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s" status="201 Created" milliseconds=3
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0602 07:54:20.608307 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s" status="201 Created" milliseconds=3
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0602 07:54:20.612210 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s" status="201 Created" milliseconds=3
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0602 07:54:20.612375 182 clusterinfo.go:59] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0602 07:54:20.612744 182 clusterinfo.go:71] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0602 07:54:20.616104 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:20.616310 182 clusterinfo.go:85] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0602 07:54:20.728676 182 request.go:683] "Waited before sending request" delay="112.213577ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s"
I0602 07:54:20.732584 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:20.929240 182 request.go:683] "Waited before sending request" delay="196.396191ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s"
I0602 07:54:20.932548 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s" status="201 Created" milliseconds=3
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0602 07:54:20.932709 182 kubeletfinalize.go:91] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
I0602 07:54:20.933355 182 loader.go:402] Config loaded from file: /etc/kubernetes/kubelet.conf
I0602 07:54:20.933902 182 kubeletfinalize.go:145] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
I0602 07:54:21.411158 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns" status="200 OK" milliseconds=5
I0602 07:54:21.414219 182 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s" status="404 Not Found" milliseconds=2
I0602 07:54:21.417943 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:21.422042 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:21.426386 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:21.430306 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s" status="201 Created" milliseconds=3
I0602 07:54:21.436816 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s" status="201 Created" milliseconds=5
I0602 07:54:21.447567 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/services?timeout=10s" status="201 Created" milliseconds=9
[addons] Applied essential addon: CoreDNS
I0602 07:54:21.452835 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s" status="201 Created" milliseconds=3
url="https://kind-control-plane:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s" status="201 Created" milliseconds=5 I0602 07:54:21.543615 182 request.go:683] "Waited before sending request" delay="84.286876ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://kind-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s" I0602 07:54:21.547789 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s" status="201 Created" milliseconds=3 I0602 07:54:21.552023 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s" status="201 Created" milliseconds=3 I0602 07:54:21.729674 182 request.go:683] "Waited before sending request" delay="177.381184ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s" I0602 07:54:21.733049 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s" status="201 Created" milliseconds=3 I0602 07:54:21.929655 182 request.go:683] "Waited before sending request" delay="196.378582ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s" I0602 07:54:21.932936 182 round_trippers.go:632] "Response" verb="POST" url="https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s" status="201 Created" milliseconds=3 [addons] Applied essential addon: kube-proxy I0602 07:54:21.933659 182 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf I0602 07:54:21.934261 182 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Alternatively, if you are the root user, you can run: export KUBECONFIG=/etc/kubernetes/admin.conf You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root: kubeadm join kind-control-plane:6443 --token \ --discovery-token-ca-cert-hash sha256:75f99fbabd8750fba19b76d06900ab79003d32b7a919e34300f157a62ccc8533 \ --control-plane Then you can join any number of worker nodes by running the following on each as root: kubeadm join kind-control-plane:6443 --token \ --discovery-token-ca-cert-hash sha256:75f99fbabd8750fba19b76d06900ab79003d32b7a919e34300f157a62ccc8533 ✓ Starting control-plane 🕹️ • Installing CNI 🔌 ... ✓ Installing CNI 🔌 • Installing StorageClass 💾 ... ✓ Installing StorageClass 💾 • Waiting ≤ 1m0s for control-plane = Ready ⏳ ... ✓ Waiting ≤ 1m0s for control-plane = Ready ⏳ • Ready after 19s 💚 Set kubectl context to "kind-kind" You can now use your cluster with: kubectl cluster-info --context kind-kind Have a nice day! 
1.6.26-alpine3.19: Pulling from library/memcached
4abcf2066143: Pulling fs layer
bf4ca62cc3e9: Pulling fs layer
873fcbe9681f: Pulling fs layer
3d2a65ada68a: Pulling fs layer
7e9ac0902175: Pulling fs layer
8b997d515984: Pulling fs layer
7e9ac0902175: Waiting
3d2a65ada68a: Waiting
8b997d515984: Waiting
bf4ca62cc3e9: Verifying Checksum
bf4ca62cc3e9: Download complete
873fcbe9681f: Download complete
4abcf2066143: Verifying Checksum
4abcf2066143: Download complete
4abcf2066143: Pull complete
bf4ca62cc3e9: Pull complete
873fcbe9681f: Pull complete
7e9ac0902175: Download complete
8b997d515984: Download complete
3d2a65ada68a: Verifying Checksum
3d2a65ada68a: Download complete
3d2a65ada68a: Pull complete
7e9ac0902175: Pull complete
8b997d515984: Pull complete
Digest: sha256:8906e7654a202d07e6b1bd6a1382b309e718feb9084987c365337ec8161ccaab
Status: Downloaded newer image for memcached:1.6.26-alpine3.19
docker.io/library/memcached:1.6.26-alpine3.19
Image: "memcached:1.6.26-alpine3.19" with ID "sha256:b686fb4e394c572a1980b134344b94d4673fcf49dd492838d7ee6a22c4c73466" not yet present on node "kind-control-plane", loading...
1.36.1: Pulling from library/busybox
c464210ed748: Pulling fs layer
c464210ed748: Download complete
c464210ed748: Pull complete
Digest: sha256:7edf5efe6b86dbf01ccc3c76b32a37a8e23b84e6bad81ce8ae8c221fa456fda8
Status: Downloaded newer image for busybox:1.36.1
docker.io/library/busybox:1.36.1
Image: "busybox:1.36.1" with ID "sha256:ae1d923cbe21706d4f9677ce8b05bad652be748ce7695a9137438a1e13bb0066" not yet present on node "kind-control-plane", loading...
go: downloading github.com/onsi/ginkgo/v2 v2.23.4
go: downloading github.com/onsi/gomega v1.37.0
go: downloading github.com/google/go-cmp v0.7.0
go: downloading golang.org/x/net v0.40.0
go: downloading gopkg.in/yaml.v3 v3.0.1
=== RUN   TestE2E
Starting grafana plugin kubebuilder suite
Running Suite: Kubebuilder grafana plugin e2e suite - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana
================================================================================================================
Random Seed: 1748850901
Will run 1 of 1 specs
------------------------------
kubebuilder
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:31
  plugin grafana/v1-alpha
  /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:32
    should generate a runnable project with grafana plugin
    /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:46
> Enter [BeforeEach] plugin grafana/v1-alpha - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:35 @ 06/02/25 07:55:01.061
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/e2e-rntg
< Exit [BeforeEach] plugin grafana/v1-alpha - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:35 @ 06/02/25 07:55:01.15 (89ms)
> Enter [It] should generate a runnable project with grafana plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:46 @ 06/02/25 07:55:01.15
STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:56 @ 06/02/25 07:55:01.15
running: kubebuilder init --plugins grafana.kubebuilder.io/v1-alpha
STEP: verifying the initial template content and updating for real custom metrics - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:62 @ 06/02/25 07:55:01.17
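The grafana plugin spec boils down to two CLI invocations, both visible in the log; reproducing them outside the suite looks roughly like this (a sketch; the project directory name is illustrative):

  mkdir grafana-demo && cd grafana-demo
  kubebuilder init --plugins grafana.kubebuilder.io/v1-alpha
  # hand-edit grafana/custom-metrics/config.yaml with real custom metrics, then:
  kubebuilder edit --plugins grafana.kubebuilder.io/v1-alpha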
STEP: editing a project based on grafana/custom-metrics/config.yaml - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:79 @ 06/02/25 07:55:01.17
running: kubebuilder edit --plugins grafana.kubebuilder.io/v1-alpha
< Exit [It] should generate a runnable project with grafana plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:46 @ 06/02/25 07:55:01.191 (41ms)
> Enter [AfterEach] plugin grafana/v1-alpha - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:42 @ 06/02/25 07:55:01.191
running: docker rmi -f e2e-test/controller-manager:rntg
< Exit [AfterEach] plugin grafana/v1-alpha - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/grafana/generate_test.go:42 @ 06/02/25 07:55:01.217 (26ms)
• [0.157 seconds]
------------------------------
Ran 1 of 1 Specs in 0.157 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestE2E (0.16s)
PASS
ok      sigs.k8s.io/kubebuilder/v4/test/e2e/grafana     0.167s
=== RUN   TestE2E
Starting kubebuilder suite
Running Suite: Kubebuilder e2e suite - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage
=====================================================================================================
Random Seed: 1748850902
Will run 2 of 2 specs
------------------------------
kubebuilder
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:34
  deploy image plugin
  /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:35
    should generate a runnable project with deploy-image/v1-alpha options
    /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:53
> Enter [BeforeEach] deploy image plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:38 @ 06/02/25 07:55:02.144
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/e2e-davp
< Exit [BeforeEach] deploy image plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:38 @ 06/02/25 07:55:02.228 (83ms)
> Enter [It] should generate a runnable project with deploy-image/v1-alpha options - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:53 @ 06/02/25 07:55:02.228
STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/generate_test.go:72 @ 06/02/25 07:55:02.228
running: kubebuilder init --plugins go/v4 --project-version 3 --domain example.comdavp
STEP: creating API definition with deploy-image/v1-alpha plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/generate_test.go:55 @ 06/02/25 07:55:07.694
running: kubebuilder create api --group bardavp --version v1alpha1 --kind Foodavp --plugins deploy-image/v1-alpha --image memcached:1.6.26-alpine3.19 --image-container-port 11211 --image-container-command memcached,--memory-limit=64,-o,modern,-v --run-as-user 1001 --make=false --manifests=false
STEP: updating the go.mod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:73 @ 06/02/25 07:55:07.969
running: go mod tidy
STEP: run make manifests - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:76 @ 06/02/25 07:55:08.142
running: make manifests
STEP: run make generate - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:79 @ 06/02/25 07:55:27.49
running: make generate
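For reference, the scaffolding exercised by this spec maps to the following CLI calls (a sketch with generic group/kind/domain names; the flags are taken from the log, minus the --make/--manifests toggles the suite passes because it runs those targets itself):

  kubebuilder init --plugins go/v4 --project-version 3 --domain example.com
  kubebuilder create api --group bar --version v1alpha1 --kind Foo \
    --plugins deploy-image/v1-alpha \
    --image memcached:1.6.26-alpine3.19 \
    --image-container-port 11211 \
    --image-container-command memcached,--memory-limit=64,-o,modern,-v \
    --run-as-user 1001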
STEP: run make all - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:82 @ 06/02/25 07:55:29.369
running: make all
STEP: run make install - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:85 @ 06/02/25 07:57:08.733
running: make install
STEP: building the controller image - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:88 @ 06/02/25 07:57:23.376
running: make docker-build IMG=e2e-test/controller-manager:davp
STEP: loading the controller docker image into the kind cluster - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:91 @ 06/02/25 07:58:56.563
running: kind load docker-image e2e-test/controller-manager:davp --name kind
STEP: deploying the controller-manager - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:94 @ 06/02/25 07:58:59.841
running: make deploy IMG=e2e-test/controller-manager:davp
STEP: validating that the controller-manager pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:100 @ 06/02/25 07:59:01.899
running: kubectl -n e2e-davp-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n e2e-davp-system get pods e2e-davp-controller-manager-66d8647f57-6fcvm -o jsonpath={.status.phase}
running: kubectl -n e2e-davp-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n e2e-davp-system get pods e2e-davp-controller-manager-66d8647f57-6fcvm -o jsonpath={.status.phase}
running: kubectl -n e2e-davp-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n e2e-davp-system get pods e2e-davp-controller-manager-66d8647f57-6fcvm -o jsonpath={.status.phase}
STEP: creating an instance of the CR - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:125 @ 06/02/25 07:59:04.448
running: kubectl -n e2e-davp-system apply -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/e2e-davp/config/samples/bardavp_v1alpha1_foodavp.yaml
STEP: validating that pod(s) status.phase=Running - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:136 @ 06/02/25 07:59:04.56
running: kubectl -n e2e-davp-system get pods -l app.kubernetes.io/name=e2e-davp -o jsonpath={.items[*].status}
STEP: validating that the status of the custom resource created is updated or not - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:145 @ 06/02/25 07:59:04.659
running: kubectl -n e2e-davp-system get foodavp foodavp-sample -o jsonpath={.status.conditions}
STEP: validating the finalizer - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:154 @ 06/02/25 07:59:04.744
running: kubectl -n e2e-davp-system delete -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/e2e-davp/config/samples/bardavp_v1alpha1_foodavp.yaml
running: kubectl -n e2e-davp-system get events --field-selector=type=Warning -o jsonpath={.items[*].message}
running: kubectl -n e2e-davp-system describe all
Name:             e2e-davp-controller-manager-66d8647f57-6fcvm
Namespace:        e2e-davp-system
Priority:         0
Service Account:  e2e-davp-controller-manager
Node:             kind-control-plane/172.18.0.2
Start Time:       Mon, 02 Jun 2025 07:59:01 +0000
Labels:           app.kubernetes.io/name=e2e-davp
                  control-plane=controller-manager
                  pod-template-hash=66d8647f57
Annotations:      kubectl.kubernetes.io/default-container: manager
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Controlled By:  ReplicaSet/e2e-davp-controller-manager-66d8647f57
Containers:
  manager:
    Container ID:  containerd://a080dfbfc0bc81acbc17655271692944ada743ffd0ab1dfd0e477b21eb7a50f8
    Image:         e2e-test/controller-manager:davp
    Image ID:      sha256:8fdd3ca681e49eac330a4c607a583054a28feb727bfc535d4824d71f0b337495
    Port:
    Host Port:
    Command:
      /manager
    Args:
      --metrics-bind-address=:8443
      --leader-elect
      --health-probe-bind-address=:8081
    State:          Running
      Started:      Mon, 02 Jun 2025 07:59:02 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:      10m
      memory:   64Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      FOODAVP_IMAGE:  memcached:1.6.26-alpine3.19
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5pwpm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-5pwpm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  4s    default-scheduler  Successfully assigned e2e-davp-system/e2e-davp-controller-manager-66d8647f57-6fcvm to kind-control-plane
  Normal  Pulled     3s    kubelet            Container image "e2e-test/controller-manager:davp" already present on machine
  Normal  Created    3s    kubelet            Created container: manager
  Normal  Started    3s    kubelet            Started container manager

Name:                      foodavp-sample-5cb7fdd7c6-7rpzw
Namespace:                 e2e-davp-system
Priority:                  0
Service Account:           default
Node:                      kind-control-plane/172.18.0.2
Start Time:                Mon, 02 Jun 2025 07:59:04 +0000
Labels:                    app.kubernetes.io/managed-by=FoodavpController
                           app.kubernetes.io/name=e2e-davp
                           app.kubernetes.io/version=1.6.26-alpine3.19
                           pod-template-hash=5cb7fdd7c6
Annotations:
Status:                    Terminating (lasts )
Termination Grace Period:  30s
SeccompProfile:            RuntimeDefault
IP:
IPs:
Controlled By:  ReplicaSet/foodavp-sample-5cb7fdd7c6
Containers:
  foodavp:
    Container ID:
    Image:      memcached:1.6.26-alpine3.19
    Image ID:
    Port:       11211/TCP
    Host Port:  0/TCP
    Command:
      memcached
      --memory-limit=64
      -o
      modern
      -v
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-62p2b (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-62p2b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  1s    default-scheduler  Successfully assigned e2e-davp-system/foodavp-sample-5cb7fdd7c6-7rpzw to kind-control-plane
  Normal  Pulled     0s    kubelet            Container image "memcached:1.6.26-alpine3.19" already present on machine
  Normal  Created    0s    kubelet            Created container: foodavp

Name:                     e2e-davp-controller-manager-metrics-service
Namespace:                e2e-davp-system
Labels:                   app.kubernetes.io/managed-by=kustomize
                          app.kubernetes.io/name=e2e-davp
                          control-plane=controller-manager
Annotations:
Selector:                 app.kubernetes.io/name=e2e-davp,control-plane=controller-manager
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.106.19
IPs:                      10.96.106.19
Port:                     https  8443/TCP
TargetPort:               8443/TCP
Endpoints:
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:

Name:                   e2e-davp-controller-manager
Namespace:              e2e-davp-system
CreationTimestamp:      Mon, 02 Jun 2025 07:59:01 +0000
Labels:                 app.kubernetes.io/managed-by=kustomize
                        app.kubernetes.io/name=e2e-davp
                        control-plane=controller-manager
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app.kubernetes.io/name=e2e-davp,control-plane=controller-manager
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/name=e2e-davp
                    control-plane=controller-manager
  Annotations:      kubectl.kubernetes.io/default-container: manager
  Service Account:  e2e-davp-controller-manager
  Containers:
   manager:
    Image:      e2e-test/controller-manager:davp
    Port:
    Host Port:
    Command:
      /manager
    Args:
      --metrics-bind-address=:8443
      --leader-elect
      --health-probe-bind-address=:8081
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:      10m
      memory:   64Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      FOODAVP_IMAGE:  memcached:1.6.26-alpine3.19
    Mounts:
  Volumes:
  Node-Selectors:
  Tolerations:
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:
NewReplicaSet:   e2e-davp-controller-manager-66d8647f57 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  4s    deployment-controller  Scaled up replica set e2e-davp-controller-manager-66d8647f57 from 0 to 1

Name:           e2e-davp-controller-manager-66d8647f57
Namespace:      e2e-davp-system
Selector:       app.kubernetes.io/name=e2e-davp,control-plane=controller-manager,pod-template-hash=66d8647f57
Labels:         app.kubernetes.io/name=e2e-davp
                control-plane=controller-manager
                pod-template-hash=66d8647f57
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/e2e-davp-controller-manager
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/name=e2e-davp
                    control-plane=controller-manager
                    pod-template-hash=66d8647f57
  Annotations:      kubectl.kubernetes.io/default-container: manager
  Service Account:  e2e-davp-controller-manager
  Containers:
   manager:
    Image:      e2e-test/controller-manager:davp
    Port:
    Host Port:
    Command:
      /manager
    Args:
      --metrics-bind-address=:8443
      --leader-elect
      --health-probe-bind-address=:8081
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:      10m
      memory:   64Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      FOODAVP_IMAGE:  memcached:1.6.26-alpine3.19
    Mounts:
  Volumes:
  Node-Selectors:
  Tolerations:
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  4s    replicaset-controller  Created pod: e2e-davp-controller-manager-66d8647f57-6fcvm
< Exit [It] should generate a runnable project with deploy-image/v1-alpha options - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:53 @ 06/02/25 07:59:05.224 (4m2.996s)
> Enter [AfterEach] deploy image plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:45 @ 06/02/25 07:59:05.224
STEP: clean up API objects created during the test - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:46 @ 06/02/25 07:59:05.224
running: make undeploy
STEP: removing controller image and working dir - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:49 @ 06/02/25 07:59:11.412
running: docker rmi -f e2e-test/controller-manager:davp
< Exit [AfterEach] deploy image plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:45 @ 06/02/25 07:59:11.473 (6.249s)
• [249.328 seconds]
------------------------------
kubebuilder
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:34
  deploy image plugin
  /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:35
    should generate a runnable project with deploy-image/v1-alpha without options
    /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:58
> Enter [BeforeEach] deploy image plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:38 @ 06/02/25 07:59:11.473
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/e2e-nobi
< Exit [BeforeEach] deploy image plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:38 @ 06/02/25 07:59:11.55 (77ms)
> Enter [It] should generate a runnable project with deploy-image/v1-alpha without options - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:58 @ 06/02/25 07:59:11.551
STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/generate_test.go:72 @ 06/02/25 07:59:11.551
running: kubebuilder init --plugins go/v4 --project-version 3 --domain example.comnobi
STEP: creating API definition with deploy-image/v1-alpha plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/generate_test.go:39 @ 06/02/25 07:59:11.975
running: kubebuilder create api --group barnobi --version v1alpha1 --kind Foonobi --plugins deploy-image/v1-alpha --image busybox:1.36.1 --run-as-user 1001 --make=false --manifests=false
STEP: updating the go.mod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:73 @ 06/02/25 07:59:12.155
running: go mod tidy
STEP: run make manifests - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:76 @ 06/02/25 07:59:12.361
running: make manifests
STEP: run make generate - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:79 @ 06/02/25 07:59:15.867
running: make generate
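The build-and-deploy steps that follow repeat the sequence from the first spec; condensed into the underlying commands, they are roughly (a sketch; the image tag is whatever you choose, here the one the suite happens to use):

  make manifests generate
  make install                                   # apply the generated CRDs
  make docker-build IMG=e2e-test/controller-manager:nobi
  kind load docker-image e2e-test/controller-manager:nobi --name kind
  make deploy IMG=e2e-test/controller-manager:nobi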
STEP: run make all - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:82 @ 06/02/25 07:59:17.686
running: make all
STEP: run make install - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:85 @ 06/02/25 07:59:27.39
running: make install
STEP: building the controller image - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:88 @ 06/02/25 07:59:32.136
running: make docker-build IMG=e2e-test/controller-manager:nobi
STEP: loading the controller docker image into the kind cluster - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:91 @ 06/02/25 08:00:29.203
running: kind load docker-image e2e-test/controller-manager:nobi --name kind
STEP: deploying the controller-manager - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:94 @ 06/02/25 08:00:32.124
running: make deploy IMG=e2e-test/controller-manager:nobi
STEP: validating that the controller-manager pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:100 @ 06/02/25 08:00:34.243
running: kubectl -n e2e-nobi-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n e2e-nobi-system get pods e2e-nobi-controller-manager-5df7565fc4-qqmkg -o jsonpath={.status.phase}
running: kubectl -n e2e-nobi-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n e2e-nobi-system get pods e2e-nobi-controller-manager-5df7565fc4-qqmkg -o jsonpath={.status.phase}
STEP: creating an instance of the CR - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:125 @ 06/02/25 08:00:35.606
running: kubectl -n e2e-nobi-system apply -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/e2e-nobi/config/samples/barnobi_v1alpha1_foonobi.yaml
STEP: validating that pod(s) status.phase=Running - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:136 @ 06/02/25 08:00:35.717
running: kubectl -n e2e-nobi-system get pods -l app.kubernetes.io/name=e2e-nobi -o jsonpath={.items[*].status}
STEP: validating that the status of the custom resource created is updated or not - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:145 @ 06/02/25 08:00:35.815
running: kubectl -n e2e-nobi-system get foonobi foonobi-sample -o jsonpath={.status.conditions}
STEP: validating the finalizer - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:154 @ 06/02/25 08:00:35.903
running: kubectl -n e2e-nobi-system delete -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/e2e-nobi/config/samples/barnobi_v1alpha1_foonobi.yaml
running: kubectl -n e2e-nobi-system get events --field-selector=type=Warning -o jsonpath={.items[*].message}
running: kubectl -n e2e-nobi-system describe all
Name:             e2e-nobi-controller-manager-5df7565fc4-qqmkg
Namespace:        e2e-nobi-system
Priority:         0
Service Account:  e2e-nobi-controller-manager
Node:             kind-control-plane/172.18.0.2
Start Time:       Mon, 02 Jun 2025 08:00:34 +0000
Labels:           app.kubernetes.io/name=e2e-nobi
                  control-plane=controller-manager
                  pod-template-hash=5df7565fc4
Annotations:      kubectl.kubernetes.io/default-container: manager
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Controlled By:  ReplicaSet/e2e-nobi-controller-manager-5df7565fc4
Containers:
  manager:
    Container ID:  containerd://2b5100de9f30412fe110248419fb5512e623d86dfa2f14fcb69b1574af6bc81c
    Image:         e2e-test/controller-manager:nobi
    Image ID:      sha256:08c9b35cc77bb0c229f390dd24de1f41fe1731af27094d73cde28ccfac82b5fb
    Port:
    Host Port:
    Command:
      /manager
    Args:
      --metrics-bind-address=:8443
      --leader-elect
      --health-probe-bind-address=:8081
    State:          Running
      Started:      Mon, 02 Jun 2025 08:00:35 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:      10m
      memory:   64Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      FOONOBI_IMAGE:  busybox:1.36.1
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rlcj7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-rlcj7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2s    default-scheduler  Successfully assigned e2e-nobi-system/e2e-nobi-controller-manager-5df7565fc4-qqmkg to kind-control-plane
  Normal  Pulled     2s    kubelet            Container image "e2e-test/controller-manager:nobi" already present on machine
  Normal  Created    2s    kubelet            Created container: manager
  Normal  Started    1s    kubelet            Started container manager

Name:                      foonobi-sample-5cb9696d68-gcgwm
Namespace:                 e2e-nobi-system
Priority:                  0
Service Account:           default
Node:                      kind-control-plane/172.18.0.2
Start Time:                Mon, 02 Jun 2025 08:00:35 +0000
Labels:                    app.kubernetes.io/managed-by=FoonobiController
                           app.kubernetes.io/name=e2e-nobi
                           app.kubernetes.io/version=1.36.1
                           pod-template-hash=5cb9696d68
Annotations:
Status:                    Terminating (lasts )
Termination Grace Period:  30s
SeccompProfile:            RuntimeDefault
IP:
IPs:
Controlled By:  ReplicaSet/foonobi-sample-5cb9696d68
Containers:
  foonobi:
    Container ID:
    Image:          busybox:1.36.1
    Image ID:
    Port:
    Host Port:
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7mm4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-t7mm4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  1s    default-scheduler  Successfully assigned e2e-nobi-system/foonobi-sample-5cb9696d68-gcgwm to kind-control-plane
  Normal  Pulled     0s    kubelet            Container image "busybox:1.36.1" already present on machine
  Normal  Created    0s    kubelet            Created container: foonobi

Name:       e2e-nobi-controller-manager-metrics-service
Namespace:  e2e-nobi-system
Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-nobi control-plane=controller-manager Annotations: Selector: app.kubernetes.io/name=e2e-nobi,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.191.248 IPs: 10.96.191.248 Port: https 8443/TCP TargetPort: 8443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-nobi-controller-manager Namespace: e2e-nobi-system CreationTimestamp: Mon, 02 Jun 2025 08:00:34 +0000 Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-nobi control-plane=controller-manager Annotations: deployment.kubernetes.io/revision: 1 Selector: app.kubernetes.io/name=e2e-nobi,control-plane=controller-manager Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app.kubernetes.io/name=e2e-nobi control-plane=controller-manager Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-nobi-controller-manager Containers: manager: Image: e2e-test/controller-manager:nobi Port: Host Port: Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: FOONOBI_IMAGE: busybox:1.36.1 Mounts: Volumes: Node-Selectors: Tolerations: Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: NewReplicaSet: e2e-nobi-controller-manager-5df7565fc4 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 2s deployment-controller Scaled up replica set e2e-nobi-controller-manager-5df7565fc4 from 0 to 1 Name: e2e-nobi-controller-manager-5df7565fc4 Namespace: e2e-nobi-system Selector: app.kubernetes.io/name=e2e-nobi,control-plane=controller-manager,pod-template-hash=5df7565fc4 Labels: app.kubernetes.io/name=e2e-nobi control-plane=controller-manager pod-template-hash=5df7565fc4 Annotations: deployment.kubernetes.io/desired-replicas: 1 deployment.kubernetes.io/max-replicas: 2 deployment.kubernetes.io/revision: 1 Controlled By: Deployment/e2e-nobi-controller-manager Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/name=e2e-nobi control-plane=controller-manager pod-template-hash=5df7565fc4 Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-nobi-controller-manager Containers: manager: Image: e2e-test/controller-manager:nobi Port: Host Port: Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: FOONOBI_IMAGE: busybox:1.36.1 Mounts: Volumes: Node-Selectors: Tolerations: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 2s replicaset-controller Created pod: e2e-nobi-controller-manager-5df7565fc4-qqmkg < Exit 
[It] should generate a runnable project with deploy-image/v1-alpha without options - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:58 @ 06/02/25 08:00:36.437 (1m24.886s)
> Enter [AfterEach] deploy image plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:45 @ 06/02/25 08:00:36.437
STEP: clean up API objects created during the test - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:46 @ 06/02/25 08:00:36.437
running: make undeploy
STEP: removing controller image and working dir - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:49 @ 06/02/25 08:00:42.626
running: docker rmi -f e2e-test/controller-manager:nobi
< Exit [AfterEach] deploy image plugin - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/deployimage/plugin_cluster_test.go:45 @ 06/02/25 08:00:42.688 (6.251s)
• [91.215 seconds]
------------------------------
Ran 2 of 2 Specs in 340.544 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestE2E (340.55s)
PASS
ok sigs.k8s.io/kubebuilder/v4/test/e2e/deployimage 340.553s
=== RUN TestE2E
Starting kubebuilder suite
Running Suite: Kubebuilder e2e suite - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4
============================================================================================
Random Seed: 1748851243
Will run 7 of 7 specs
------------------------------
[BeforeSuite]
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:38
> Enter [BeforeSuite] TOP-LEVEL - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:38 @ 06/02/25 08:00:43.54
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-mpxy
STEP: installing the cert-manager bundle - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:45 @ 06/02/25 08:00:43.62
running: kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml --validate=false
running: kubectl wait deployment.apps/cert-manager-webhook --for condition=Available --namespace cert-manager --timeout 5m
STEP: installing the Prometheus operator - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:48 @ 06/02/25 08:00:53.633
running: kubectl create -f https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.77.1/bundle.yaml
< Exit [BeforeSuite] TOP-LEVEL - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:38 @ 06/02/25 08:00:56.016 (12.476s)
[BeforeSuite] PASSED [12.476 seconds]
------------------------------
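The BeforeSuite above pins cert-manager v1.16.3 and prometheus-operator v0.77.1 before any go/v4 spec runs. A minimal sketch of reproducing that setup by hand against a kind cluster, using the same commands the suite issues; the final CRD readiness check is an added assumption for manual use, not something the suite performs:

  # install the pinned cert-manager bundle and wait for its webhook to become Available
  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml --validate=false
  kubectl wait deployment.apps/cert-manager-webhook --for condition=Available --namespace cert-manager --timeout 5m
  # install the pinned Prometheus operator bundle (CRDs plus deployment)
  kubectl create -f https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.77.1/bundle.yaml
  # assumed extra safeguard, not run by the suite: wait for the ServiceMonitor CRD
  kubectl wait --for condition=Established crd/servicemonitors.monitoring.coreos.com --timeout 2m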
kubebuilder
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:48
plugin go/v4
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:49
should generate a runnable project
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:69
> Enter [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:00:56.016
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-gpqh
< Exit [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:00:56.119 (103ms)
> Enter [It] should generate a runnable project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:69 @ 06/02/25 08:00:56.119
STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:232 @ 06/02/25 08:00:56.119
running: kubebuilder init --plugins go/v4 --project-version 3 --domain example.comgpqh
STEP: creating API definition - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:209 @ 06/02/25 08:00:56.939
running: kubebuilder create api --group bargpqh --version v1alpha1 --kind Foogpqh --namespaced --resource --controller --make=false
STEP: implementing the API - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:221 @ 06/02/25 08:00:57.103
STEP: scaffolding mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:36 @ 06/02/25 08:00:57.103
running: kubebuilder create webhook --group bargpqh --version v1alpha1 --kind Foogpqh --defaulting --programmatic-validation --make=false
STEP: implementing the mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:47 @ 06/02/25 08:00:57.892
STEP: scaffolding conversion webhooks for testing ConversionTest v1 to v2 conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:380 @ 06/02/25 08:00:57.892
running: kubebuilder create api --group bargpqh --version v1 --kind ConversionTest --controller=true --resource=true --make=false
running: kubebuilder create api --group bargpqh --version v2 --kind ConversionTest --controller=false --resource=true --make=false
STEP: setting up the conversion webhook for v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:405 @ 06/02/25 08:00:58.354
running: kubebuilder create webhook --group bargpqh --version v1 --kind ConversionTest --conversion --spoke v2 --make=false
STEP: implementing the size spec in v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:417 @ 06/02/25 08:00:58.556
STEP: implementing the replicas spec in v2 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:425 @ 06/02/25 08:00:58.556
STEP: creating manager namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:114 @ 06/02/25 08:00:58.559
running: kubectl create ns e2e-gpqh-system
STEP: labeling the namespace to enforce the restricted security policy - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:118 @ 06/02/25 08:00:58.642
running: kubectl label --overwrite ns e2e-gpqh-system pod-security.kubernetes.io/enforce=restricted
STEP: updating the go.mod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:122 @ 06/02/25 08:00:58.736
running: go mod tidy
STEP: run make all - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:126 @ 06/02/25 08:00:58.917
running: make all
STEP: building the controller image - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:130 @ 06/02/25 08:01:10.504
running: make docker-build IMG=e2e-test/controller-manager:gpqh
STEP: loading the controller docker image into the kind cluster - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:134 @ 06/02/25 08:02:11.817
running: kind load docker-image e2e-test/controller-manager:gpqh --name kind
STEP: deploying the controller-manager - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:139 @ 06/02/25 08:02:14.696
running: make deploy
IMG=e2e-test/controller-manager:gpqh STEP: Checking controllerManager and getting the name of the Pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:180 @ 06/02/25 08:02:20.676 STEP: validating that the controller-manager pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:433 @ 06/02/25 08:02:20.677 running: kubectl -n e2e-gpqh-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-gpqh-system get pods e2e-gpqh-controller-manager-659d4b87c5-rbd6g -o jsonpath={.status.phase} running: kubectl -n e2e-gpqh-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-gpqh-system get pods e2e-gpqh-controller-manager-659d4b87c5-rbd6g -o jsonpath={.status.phase} running: kubectl -n e2e-gpqh-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-gpqh-system get pods e2e-gpqh-controller-manager-659d4b87c5-rbd6g -o jsonpath={.status.phase} running: kubectl -n e2e-gpqh-system describe all Name: e2e-gpqh-controller-manager-659d4b87c5-rbd6g Namespace: e2e-gpqh-system Priority: 0 Service Account: e2e-gpqh-controller-manager Node: kind-control-plane/172.18.0.2 Start Time: Mon, 02 Jun 2025 08:02:20 +0000 Labels: app.kubernetes.io/name=e2e-gpqh control-plane=controller-manager pod-template-hash=659d4b87c5 Annotations: kubectl.kubernetes.io/default-container: manager Status: Running SeccompProfile: RuntimeDefault IP: 10.244.0.13 IPs: IP: 10.244.0.13 Controlled By: ReplicaSet/e2e-gpqh-controller-manager-659d4b87c5 Containers: manager: Container ID: containerd://cd5d407b74752dc4d65b74487230f94d657e9dfbb43f4819b5c112501fe031a4 Image: e2e-test/controller-manager:gpqh Image ID: sha256:f67dce8bf3484cf7beced80c8f617fd5797d7be84133b5ed9e5fd0829b4e3529 Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs State: Running Started: Mon, 02 Jun 2025 08:02:22 +0000 Ready: False Restart Count: 0 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rndvf (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready False ContainersReady False PodScheduled True Volumes: metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false kube-api-access-rndvf: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt Optional: false DownwardAPI: true QoS Class: Burstable 
Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 3s default-scheduler Successfully assigned e2e-gpqh-system/e2e-gpqh-controller-manager-659d4b87c5-rbd6g to kind-control-plane Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found Normal Pulled 2s kubelet Container image "e2e-test/controller-manager:gpqh" already present on machine Normal Created 2s kubelet Created container: manager Normal Started 1s kubelet Started container manager Name: e2e-gpqh-controller-manager-metrics-service Namespace: e2e-gpqh-system Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-gpqh control-plane=controller-manager Annotations: Selector: app.kubernetes.io/name=e2e-gpqh,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.62.68 IPs: 10.96.62.68 Port: https 8443/TCP TargetPort: 8443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-gpqh-webhook-service Namespace: e2e-gpqh-system Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-gpqh Annotations: Selector: app.kubernetes.io/name=e2e-gpqh,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.98.145 IPs: 10.96.98.145 Port: 443/TCP TargetPort: 9443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-gpqh-controller-manager Namespace: e2e-gpqh-system CreationTimestamp: Mon, 02 Jun 2025 08:02:20 +0000 Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-gpqh control-plane=controller-manager Annotations: deployment.kubernetes.io/revision: 1 Selector: app.kubernetes.io/name=e2e-gpqh,control-plane=controller-manager Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app.kubernetes.io/name=e2e-gpqh control-plane=controller-manager Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-gpqh-controller-manager Containers: manager: Image: e2e-test/controller-manager:gpqh Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) Volumes: metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false Node-Selectors: Tolerations: Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated 
OldReplicaSets: NewReplicaSet: e2e-gpqh-controller-manager-659d4b87c5 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 3s deployment-controller Scaled up replica set e2e-gpqh-controller-manager-659d4b87c5 from 0 to 1 Name: e2e-gpqh-controller-manager-659d4b87c5 Namespace: e2e-gpqh-system Selector: app.kubernetes.io/name=e2e-gpqh,control-plane=controller-manager,pod-template-hash=659d4b87c5 Labels: app.kubernetes.io/name=e2e-gpqh control-plane=controller-manager pod-template-hash=659d4b87c5 Annotations: deployment.kubernetes.io/desired-replicas: 1 deployment.kubernetes.io/max-replicas: 2 deployment.kubernetes.io/revision: 1 Controlled By: Deployment/e2e-gpqh-controller-manager Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/name=e2e-gpqh control-plane=controller-manager pod-template-hash=659d4b87c5 Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-gpqh-controller-manager Containers: manager: Image: e2e-test/controller-manager:gpqh Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) Volumes: metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false Node-Selectors: Tolerations: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 3s replicaset-controller Created pod: e2e-gpqh-controller-manager-659d4b87c5-rbd6g STEP: Checking if all flags are applied to the manager pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:183 @ 06/02/25 08:02:23.389 running: kubectl -n e2e-gpqh-system get pod e2e-gpqh-controller-manager-659d4b87c5-rbd6g -o jsonpath={.spec.containers[0].args} STEP: validating that the Prometheus manager has provisioned the Service - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:195 @ 06/02/25 08:02:23.475 running: kubectl get Service prometheus-operator STEP: validating that the ServiceMonitor for Prometheus is applied in the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:203 @ 06/02/25 08:02:23.56 running: kubectl -n e2e-gpqh-system get ServiceMonitor STEP: validating that cert-manager has provisioned the certificate Secret - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:247 @ 06/02/25 08:02:23.647 running: kubectl -n e2e-gpqh-system get secrets webhook-server-cert STEP: validating that the mutating|validating webhooks have the CA injected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:260 @ 06/02/25 08:02:23.731 running: kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io e2e-gpqh-mutating-webhook-configuration -o go-template={{ range 
.webhooks }}{{ .clientConfig.caBundle }}{{ end }} running: kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io e2e-gpqh-validating-webhook-configuration -o go-template={{ range .webhooks }}{{ .clientConfig.caBundle }}{{ end }} STEP: validating that the CA injection is applied for CRD conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:284 @ 06/02/25 08:02:23.932 running: kubectl get customresourcedefinition.apiextensions.k8s.io -o jsonpath={.items[?(@.spec.names.kind=='ConversionTest')].spec.conversion.webhook.clientConfig.caBundle} STEP: creating an instance of the CR - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:306 @ 06/02/25 08:02:24.851 running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml running: kubectl -n e2e-gpqh-system apply -f config/samples/bargpqh_v1alpha1_foogpqh.yaml STEP: checking the metrics values to validate that the created resource object gets reconciled - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:330 @ 06/02/25 08:02:34.815 running: kubectl get clusterrolebinding metrics-gpqh running: kubectl create clusterrolebinding metrics-gpqh --clusterrole=e2e-gpqh-metrics-reader --serviceaccount=e2e-gpqh-system:e2e-gpqh-controller-manager running: kubectl create --raw /api/v1/namespaces/e2e-gpqh-system/serviceaccounts/e2e-gpqh-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-gpqh/e2e-gpqh-controller-manager-token-request STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:02:35.057 running: kubectl -n e2e-gpqh-system get service e2e-gpqh-controller-manager-metrics-service STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:02:35.143 running: kubectl -n e2e-gpqh-system get endpoints e2e-gpqh-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip} STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:02:35.229 running: kubectl -n e2e-gpqh-system run curl --restart=Never --namespace e2e-gpqh-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU0OTU1LCJpYXQiOjE3NDg4NTEzNTUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMTM4ZTdiODktODFkZS00MmJkLWExOTgtODkyM2U2NDhjYmRhIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtZ3BxaC1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLWdwcWgtY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiN2YxOGYzNjYtZGNjNy00NTM4LWI0NjQtOTI0MDkyZTQxMzA0In19LCJuYmYiOjE3NDg4NTEzNTUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtZ3BxaC1zeXN0ZW06ZTJlLWdwcWgtY29udHJvbGxlci1tYW5hZ2VyIn0.R7WahkjXxYAd0h6KC2zIWLt4KrYLQz7myfzAV9JWY7yZw45Vreyxq51DqN1Q6QWVN6ITv5uaUDUxD-W7ArRZ9HvUjEilGwnZwLTcdPdGIPVIyyGR1l3_BANgOiZhfdPLJ3ktudh_MVWU8JlZoCIPCwhlf3ZuWl6YCqGfIewGGH6Fnmxi8H93QduFfb-pxTf1XC8yWx0M0zA1FBu9JahrTYlDQ7aszZh6eqO-DF_Tq8T6jJiPXH5lrHM0rM38tVfXtWrTYWu2F6qVrARM_kjVwqjYMecOMfoWUbMYx6hcUtq2jCjmyIipBwqbihTckY8RAzwBE85Ao7Ns50g2cOaQjg' https://e2e-gpqh-controller-manager-metrics-service.e2e-gpqh-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-gpqh-controller-manager" } }
STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:02:35.319
running: kubectl -n e2e-gpqh-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n e2e-gpqh-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n e2e-gpqh-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n e2e-gpqh-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n e2e-gpqh-system get pods curl -o jsonpath={.status.phase}
STEP: validating that the metrics endpoint is serving as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:528 @ 06/02/25 08:02:39.754
running: kubectl -n e2e-gpqh-system logs curl
STEP: cleaning up the curl pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:611 @ 06/02/25 08:02:39.884
running: kubectl -n e2e-gpqh-system delete pods/curl
STEP: validating that mutating and validating webhooks are working fine - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:344 @ 06/02/25 08:02:39.975
running: kubectl -n e2e-gpqh-system get -f config/samples/bargpqh_v1alpha1_foogpqh.yaml -o go-template={{ .spec.count }}
STEP: creating a namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:356 @ 06/02/25 08:02:40.056
running: kubectl create namespace test-webhooks
STEP: applying the CR in the created namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:361 @ 06/02/25 08:02:40.133
running: kubectl apply -n test-webhooks -f config/samples/bargpqh_v1alpha1_foogpqh.yaml
STEP: validating that mutating webhooks are working fine outside of the manager's namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:369 @ 06/02/25 08:02:40.232
running: kubectl get -n test-webhooks -f config/samples/bargpqh_v1alpha1_foogpqh.yaml -o go-template={{ .spec.count }}
STEP: removing the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:382 @ 06/02/25 08:02:40.313
running: kubectl delete namespace test-webhooks
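The metrics check above mints a ServiceAccount token through the raw TokenRequest endpoint and curls the manager's HTTPS metrics Service from an in-cluster pod, authorized by the metrics-gpqh ClusterRoleBinding created earlier. A hand-run equivalent, assuming this run's e2e-gpqh names and substituting the simpler kubectl create token for the raw API call:

  # mint a short-lived bearer token for the manager's ServiceAccount
  TOKEN=$(kubectl -n e2e-gpqh-system create token e2e-gpqh-controller-manager)
  # curl the metrics endpoint from inside the cluster, as the suite's curl pod does
  kubectl -n e2e-gpqh-system run curl --rm -i --restart=Never \
    --image=curlimages/curl:latest --command -- sh -c \
    "curl -s -k -H 'Authorization: Bearer $TOKEN' https://e2e-gpqh-controller-manager-metrics-service.e2e-gpqh-system.svc.cluster.local:8443/metrics"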
STEP: validating the conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:386 @ 06/02/25 08:02:45.653
STEP: modifying the ConversionTest CR sample to set `size` for conversion testing - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:389 @ 06/02/25 08:02:45.653
STEP: applying the modified ConversionTest CR in v1 for conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:399 @ 06/02/25 08:02:45.653
running: kubectl -n e2e-gpqh-system apply -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-gpqh/config/samples/bargpqh_v1_conversiontest.yaml
STEP: waiting for the ConversionTest CR to appear - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:403 @ 06/02/25 08:02:45.752
running: kubectl -n e2e-gpqh-system get conversiontest conversiontest-sample
STEP: validating that the converted resource in v2 has replicas == 3 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:409 @ 06/02/25 08:02:45.84
running: kubectl -n e2e-gpqh-system get conversiontest conversiontest-sample -o jsonpath={.spec.replicas}
STEP: validating conversion metrics to confirm conversion operations - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:423 @ 06/02/25 08:02:45.926
running: kubectl get clusterrolebinding metrics-gpqh
running: kubectl create --raw /api/v1/namespaces/e2e-gpqh-system/serviceaccounts/e2e-gpqh-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-gpqh/e2e-gpqh-controller-manager-token-request
STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:02:46.09
running: kubectl -n e2e-gpqh-system get service e2e-gpqh-controller-manager-metrics-service
STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:02:46.175
running: kubectl -n e2e-gpqh-system get endpoints e2e-gpqh-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip}
STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:02:46.259
running: kubectl -n e2e-gpqh-system run curl --restart=Never --namespace e2e-gpqh-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU0OTY2LCJpYXQiOjE3NDg4NTEzNjYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiZDc5MDM2YzEtNTZmNC00YmVjLTk3NTgtMzRlOTk0OTNlODU3Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtZ3BxaC1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLWdwcWgtY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiN2YxOGYzNjYtZGNjNy00NTM4LWI0NjQtOTI0MDkyZTQxMzA0In19LCJuYmYiOjE3NDg4NTEzNjYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtZ3BxaC1zeXN0ZW06ZTJlLWdwcWgtY29udHJvbGxlci1tYW5hZ2VyIn0.pu4QjN7-SOXEIhh7i-K7etc705SIS5WgaIaHIg692S55Nx4vtBtJJHxfLTdMTyrCR4VhI9a5140zXvNLP74e8o_J-fr1DkFTcxl_Qln6GSOCtqM_FlRyrqquSvY2yaLGP1de8tWmJ4t_IbE6DWYqn02BQ8RShC9nrrShFRbOI55Bb-Da7LO7_1L5BjAcrJ8kAQLjDzTRW8RDUNlqSA3AlfDmSVGUUPoeJWARcfNmlOTP37ro3FKW2qgpXWQcZgKHE7IExyUEj0vhhVNQaMq3QT8yY8-dmSDSmRxu7KYhjqxQZmqqHV4CQ1K1_To_y8tCEXNEWDlT2h5hzsGnkQD0yg' https://e2e-gpqh-controller-manager-metrics-service.e2e-gpqh-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-gpqh-controller-manager" } } STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:02:46.346 running: kubectl -n e2e-gpqh-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-gpqh-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-gpqh-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-gpqh-system get pods curl -o jsonpath={.status.phase} STEP: validating that the metrics endpoint is serving as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:528 @ 06/02/25 08:02:49.687 running: kubectl -n e2e-gpqh-system logs curl STEP: cleaning up the curl pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:611 @ 06/02/25 08:02:49.808 running: kubectl -n e2e-gpqh-system delete pods/curl < Exit [It] should generate a runnable project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:69 @ 06/02/25 08:02:49.897 (1m53.777s) > Enter [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:02:49.897 STEP: By removing restricted namespace label - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:60 @ 06/02/25 08:02:49.897 running: kubectl label ns e2e-gpqh-system pod-security.kubernetes.io/enforce- STEP: clean up API objects created during the test - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:63 @ 06/02/25 08:02:49.986 running: make undeploy STEP: removing controller image and working dir - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:66 @ 06/02/25 08:02:59.237 running: docker rmi -f e2e-test/controller-manager:gpqh < Exit [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:02:59.309 (9.412s) • [123.293 seconds] ------------------------------ kubebuilder /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:48 plugin go/v4 /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:49 should 
generate a runnable project with the Installer /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:73 > Enter [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:02:59.309 running: kubectl version -o json cleaning up tools preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-cevo < Exit [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:02:59.403 (93ms) > Enter [It] should generate a runnable project with the Installer - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:73 @ 06/02/25 08:02:59.403 STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:232 @ 06/02/25 08:02:59.403 running: kubebuilder init --plugins go/v4 --project-version 3 --domain example.comcevo STEP: creating API definition - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:209 @ 06/02/25 08:03:00.068 running: kubebuilder create api --group barcevo --version v1alpha1 --kind Foocevo --namespaced --resource --controller --make=false STEP: implementing the API - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:221 @ 06/02/25 08:03:00.22 STEP: scaffolding mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:36 @ 06/02/25 08:03:00.221 running: kubebuilder create webhook --group barcevo --version v1alpha1 --kind Foocevo --defaulting --programmatic-validation --make=false STEP: implementing the mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:47 @ 06/02/25 08:03:00.819 STEP: scaffolding conversion webhooks for testing ConversionTest v1 to v2 conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:380 @ 06/02/25 08:03:00.819 running: kubebuilder create api --group barcevo --version v1 --kind ConversionTest --controller=true --resource=true --make=false running: kubebuilder create api --group barcevo --version v2 --kind ConversionTest --controller=false --resource=true --make=false STEP: setting up the conversion webhook for v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:405 @ 06/02/25 08:03:01.184 running: kubebuilder create webhook --group barcevo --version v1 --kind ConversionTest --conversion --spoke v2 --make=false STEP: implementing the size spec in v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:417 @ 06/02/25 08:03:01.423 STEP: implementing the replicas spec in v2 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:425 @ 06/02/25 08:03:01.423 STEP: creating manager namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:114 @ 06/02/25 08:03:01.426 running: kubectl create ns e2e-cevo-system STEP: labeling the namespace to enforce the restricted security policy - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:118 @ 06/02/25 08:03:01.506 running: kubectl label --overwrite ns e2e-cevo-system pod-security.kubernetes.io/enforce=restricted STEP: updating the go.mod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:122 @ 06/02/25 08:03:01.597 running: go mod tidy STEP: run make all - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:126 @ 06/02/25 08:03:01.76 running: make all STEP: 
building the controller image - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:130 @ 06/02/25 08:03:13.13 running: make docker-build IMG=e2e-test/controller-manager:cevo STEP: loading the controller docker image into the kind cluster - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:134 @ 06/02/25 08:04:08.254 running: kind load docker-image e2e-test/controller-manager:cevo --name kind STEP: building the installer - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:146 @ 06/02/25 08:04:11.478 running: make build-installer IMG=e2e-test/controller-manager:cevo STEP: deploying the controller-manager with the installer - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:150 @ 06/02/25 08:04:18.187 running: kubectl -n e2e-cevo-system apply -f dist/install.yaml STEP: Checking controllerManager and getting the name of the Pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:180 @ 06/02/25 08:04:19.353 STEP: validating that the controller-manager pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:433 @ 06/02/25 08:04:19.353 running: kubectl -n e2e-cevo-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-cevo-system get pods e2e-cevo-controller-manager-cbd6c5c75-h8wwf -o jsonpath={.status.phase} running: kubectl -n e2e-cevo-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-cevo-system get pods e2e-cevo-controller-manager-cbd6c5c75-h8wwf -o jsonpath={.status.phase} running: kubectl -n e2e-cevo-system describe all Name: e2e-cevo-controller-manager-cbd6c5c75-h8wwf Namespace: e2e-cevo-system Priority: 0 Service Account: e2e-cevo-controller-manager Node: kind-control-plane/172.18.0.2 Start Time: Mon, 02 Jun 2025 08:04:19 +0000 Labels: app.kubernetes.io/name=e2e-cevo control-plane=controller-manager pod-template-hash=cbd6c5c75 Annotations: kubectl.kubernetes.io/default-container: manager Status: Running SeccompProfile: RuntimeDefault IP: 10.244.0.16 IPs: IP: 10.244.0.16 Controlled By: ReplicaSet/e2e-cevo-controller-manager-cbd6c5c75 Containers: manager: Container ID: containerd://d0a62deec179b7047c4eae3adb2bc99b4d4b076a497ee6ff94f68ca1da59b80d Image: e2e-test/controller-manager:cevo Image ID: sha256:486e2ac37060fdeffdd2921297e7368d243c67a2928d5f306fc9f8f5ef89e8e6 Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs State: Running Started: Mon, 02 Jun 2025 08:04:20 +0000 Ready: False Restart Count: 0 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7r5x (ro) Conditions: Type Status PodReadyToStartContainers True 
Initialized True Ready False ContainersReady False PodScheduled True Volumes: metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false kube-api-access-l7r5x: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt Optional: false DownwardAPI: true QoS Class: Burstable Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 1s default-scheduler Successfully assigned e2e-cevo-system/e2e-cevo-controller-manager-cbd6c5c75-h8wwf to kind-control-plane Normal Pulled 1s kubelet Container image "e2e-test/controller-manager:cevo" already present on machine Normal Created 1s kubelet Created container: manager Normal Started 0s kubelet Started container manager Name: e2e-cevo-controller-manager-metrics-service Namespace: e2e-cevo-system Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-cevo control-plane=controller-manager Annotations: Selector: app.kubernetes.io/name=e2e-cevo,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.52.199 IPs: 10.96.52.199 Port: https 8443/TCP TargetPort: 8443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-cevo-webhook-service Namespace: e2e-cevo-system Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-cevo Annotations: Selector: app.kubernetes.io/name=e2e-cevo,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.26.214 IPs: 10.96.26.214 Port: 443/TCP TargetPort: 9443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-cevo-controller-manager Namespace: e2e-cevo-system CreationTimestamp: Mon, 02 Jun 2025 08:04:19 +0000 Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-cevo control-plane=controller-manager Annotations: deployment.kubernetes.io/revision: 1 Selector: app.kubernetes.io/name=e2e-cevo,control-plane=controller-manager Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app.kubernetes.io/name=e2e-cevo control-plane=controller-manager Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-cevo-controller-manager Containers: manager: Image: e2e-test/controller-manager:cevo Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) Volumes: metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false 
webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false Node-Selectors: Tolerations: Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: NewReplicaSet: e2e-cevo-controller-manager-cbd6c5c75 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 1s deployment-controller Scaled up replica set e2e-cevo-controller-manager-cbd6c5c75 from 0 to 1 Name: e2e-cevo-controller-manager-cbd6c5c75 Namespace: e2e-cevo-system Selector: app.kubernetes.io/name=e2e-cevo,control-plane=controller-manager,pod-template-hash=cbd6c5c75 Labels: app.kubernetes.io/name=e2e-cevo control-plane=controller-manager pod-template-hash=cbd6c5c75 Annotations: deployment.kubernetes.io/desired-replicas: 1 deployment.kubernetes.io/max-replicas: 2 deployment.kubernetes.io/revision: 1 Controlled By: Deployment/e2e-cevo-controller-manager Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/name=e2e-cevo control-plane=controller-manager pod-template-hash=cbd6c5c75 Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-cevo-controller-manager Containers: manager: Image: e2e-test/controller-manager:cevo Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) Volumes: metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false Node-Selectors: Tolerations: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: e2e-cevo-controller-manager-cbd6c5c75-h8wwf STEP: Checking if all flags are applied to the manager pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:183 @ 06/02/25 08:04:20.949 running: kubectl -n e2e-cevo-system get pod e2e-cevo-controller-manager-cbd6c5c75-h8wwf -o jsonpath={.spec.containers[0].args} STEP: validating that the Prometheus manager has provisioned the Service - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:195 @ 06/02/25 08:04:21.038 running: kubectl get Service prometheus-operator STEP: validating that the ServiceMonitor for Prometheus is applied in the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:203 @ 06/02/25 08:04:21.123 running: kubectl -n e2e-cevo-system get ServiceMonitor STEP: validating that cert-manager has provisioned the certificate Secret - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:247 @ 06/02/25 08:04:21.212 running: kubectl -n e2e-cevo-system get secrets webhook-server-cert STEP: validating that the mutating|validating webhooks have the CA injected 
- /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:260 @ 06/02/25 08:04:21.299 running: kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io e2e-cevo-mutating-webhook-configuration -o go-template={{ range .webhooks }}{{ .clientConfig.caBundle }}{{ end }} running: kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io e2e-cevo-validating-webhook-configuration -o go-template={{ range .webhooks }}{{ .clientConfig.caBundle }}{{ end }} STEP: validating that the CA injection is applied for CRD conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:284 @ 06/02/25 08:04:21.498 running: kubectl get customresourcedefinition.apiextensions.k8s.io -o jsonpath={.items[?(@.spec.names.kind=='ConversionTest')].spec.conversion.webhook.clientConfig.caBundle} STEP: creating an instance of the CR - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:306 @ 06/02/25 08:04:22.416 running: kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1alpha1_foocevo.yaml running: kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1alpha1_foocevo.yaml running: kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1alpha1_foocevo.yaml running: kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1alpha1_foocevo.yaml running: kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1alpha1_foocevo.yaml running: kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1alpha1_foocevo.yaml running: kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1alpha1_foocevo.yaml running: kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1alpha1_foocevo.yaml running: kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1alpha1_foocevo.yaml STEP: checking the metrics values to validate that the created resource object gets reconciled - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:330 @ 06/02/25 08:04:31.292 running: kubectl get clusterrolebinding metrics-cevo running: kubectl create clusterrolebinding metrics-cevo --clusterrole=e2e-cevo-metrics-reader --serviceaccount=e2e-cevo-system:e2e-cevo-controller-manager running: kubectl create --raw /api/v1/namespaces/e2e-cevo-system/serviceaccounts/e2e-cevo-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-cevo/e2e-cevo-controller-manager-token-request STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:04:31.539 running: kubectl -n e2e-cevo-system get service e2e-cevo-controller-manager-metrics-service STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:04:31.625 running: kubectl -n e2e-cevo-system get endpoints e2e-cevo-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip} STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:04:31.708 running: kubectl -n e2e-cevo-system run curl --restart=Never --namespace e2e-cevo-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU1MDcxLCJpYXQiOjE3NDg4NTE0NzEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiZGI2ZDE2MjUtYjlkOC00MTY5LTkzODEtOTk2ZGQ5YWMxYmRjIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtY2V2by1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLWNldm8tY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiNmVmYzAwOTUtZDcwZC00YjU3LTgyZjEtMjI5ZTFkNTc2MmNkIn19LCJuYmYiOjE3NDg4NTE0NzEsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtY2V2by1zeXN0ZW06ZTJlLWNldm8tY29udHJvbGxlci1tYW5hZ2VyIn0.grGZ-BZYn_ZJaJAS4ZV_v6KvlbJF2fyf1G-Um0exdyzmbdscq2tRg1ti9GhGu2KpXohNgyVkgQe-3mXyTRlP_1fQqXBbm-pWz-6hW3QdFzo-bKSG4HN5Njq93t-JBH-ZMlTZfLENfR5hg_J-VPsjXf6hyeBK0Ii5rIqyXh4RJWdT0q4M_wnO6-dgxWKcIeQxmeqagixxuguKZ_XAMIG0MN3gcGRLwOQZbZTwxIzmFP5Xy33nJXN42L_7sh4R3kNncGRygWEF08Rt_bmqCXvg0K4sXmmhb4m2X60As_b-IivQAXaPHvIZMQX7CJMH8cMCAK9Uq40RWWjbcMZ-n9-3ug' https://e2e-cevo-controller-manager-metrics-service.e2e-cevo-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-cevo-controller-manager" } }
STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:04:31.797
running: kubectl -n e2e-cevo-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n e2e-cevo-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n e2e-cevo-system get pods curl -o jsonpath={.status.phase}
running: kubectl -n e2e-cevo-system get pods curl -o jsonpath={.status.phase}
STEP: validating that the metrics endpoint is serving as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:528 @ 06/02/25 08:04:35.14
running: kubectl -n e2e-cevo-system logs curl
STEP: cleaning up the curl pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:611 @ 06/02/25 08:04:35.27
running: kubectl -n e2e-cevo-system delete pods/curl
STEP: validating that mutating and validating webhooks are working fine - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:344 @ 06/02/25 08:04:35.36
running: kubectl -n e2e-cevo-system get -f config/samples/barcevo_v1alpha1_foocevo.yaml -o go-template={{ .spec.count }}
STEP: creating a namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:356 @ 06/02/25 08:04:35.442
running: kubectl create namespace test-webhooks
STEP: applying the CR in the created namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:361 @ 06/02/25 08:04:35.52
running: kubectl apply -n test-webhooks -f config/samples/barcevo_v1alpha1_foocevo.yaml
STEP: validating that mutating webhooks are working fine outside of the manager's namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:369 @ 06/02/25 08:04:35.618
running: kubectl get -n test-webhooks -f config/samples/barcevo_v1alpha1_foocevo.yaml -o go-template={{ .spec.count }}
STEP: removing the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:382 @ 06/02/25 08:04:35.701
running: kubectl delete namespace test-webhooks
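The block that follows validates the v1 to v2 conversion for this suite's ConversionTest kind: the sample is applied via the v1 API with `size` set, and reading the object back is expected to go through the conversion webhook and report spec.replicas == 3. A minimal manual sketch, assuming this run's e2e-cevo sample path and that the modified sample sets size to 3, matching the replica count the suite expects:

  # apply the ConversionTest sample through the v1 API
  kubectl -n e2e-cevo-system apply -f config/samples/barcevo_v1_conversiontest.yaml
  # read it back; the API server invokes the conversion webhook, so the
  # returned object should carry spec.replicas converted from spec.size
  kubectl -n e2e-cevo-system get conversiontest conversiontest-sample -o jsonpath='{.spec.replicas}'   # expect: 3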
STEP: validating the conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:386 @ 06/02/25 08:04:41.128
STEP: modifying the ConversionTest CR sample to set `size` for conversion testing - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:389 @ 06/02/25 08:04:41.128
STEP: applying the modified ConversionTest CR in v1 for conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:399 @ 06/02/25 08:04:41.129
running: kubectl -n e2e-cevo-system apply -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-cevo/config/samples/barcevo_v1_conversiontest.yaml
STEP: waiting for the ConversionTest CR to appear - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:403 @ 06/02/25 08:04:41.225
running: kubectl -n e2e-cevo-system get conversiontest conversiontest-sample
STEP: validating that the converted resource in v2 has replicas == 3 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:409 @ 06/02/25 08:04:41.312
running: kubectl -n e2e-cevo-system get conversiontest conversiontest-sample -o jsonpath={.spec.replicas}
STEP: validating conversion metrics to confirm conversion operations - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:423 @ 06/02/25 08:04:41.398
running: kubectl get clusterrolebinding metrics-cevo
running: kubectl create --raw /api/v1/namespaces/e2e-cevo-system/serviceaccounts/e2e-cevo-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-cevo/e2e-cevo-controller-manager-token-request
STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:04:41.564
running: kubectl -n e2e-cevo-system get service e2e-cevo-controller-manager-metrics-service
STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:04:41.651
running: kubectl -n e2e-cevo-system get endpoints e2e-cevo-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip}
STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:04:41.735
running: kubectl -n e2e-cevo-system run curl --restart=Never --namespace e2e-cevo-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU1MDgxLCJpYXQiOjE3NDg4NTE0ODEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNjgxNTAyNzEtNGVlZS00OTRhLWIwOGQtYmM1OGVjNDZiZmQ0Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtY2V2by1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLWNldm8tY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiNmVmYzAwOTUtZDcwZC00YjU3LTgyZjEtMjI5ZTFkNTc2MmNkIn19LCJuYmYiOjE3NDg4NTE0ODEsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtY2V2by1zeXN0ZW06ZTJlLWNldm8tY29udHJvbGxlci1tYW5hZ2VyIn0.Xuw8wmZSrWGe1PDe1ouTybeqgYgD7Kl_HECKj1pA9zZgDSr-IdEFKWN2KA5C7vVcw7msixFsLNnd83_Xn2MIbzxFjEd7_7Y5-On8PW0wQf49CGtaLxMPh9L0w43YNxIq4wOXJkGUa2cNjCFttKRWlUbNd_s8_YVnAW4uPQGPWxO7TjByFiM3Lkr3cM4ffuvOu09A5uLrs8sUnlVOEt9w_m1_sdjccBo8fAFpvG2MOIRoINDO7IFnxntfkIpvKcr-fkJtLmQWqdk5dNggnZUpmyX499nLITkxyaA5vDDwRMRvIC5b_35PK0Y2P5P6UIYspZge7zOxwAiH4LHjP7ysRw' https://e2e-cevo-controller-manager-metrics-service.e2e-cevo-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-cevo-controller-manager" } } STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:04:41.823 running: kubectl -n e2e-cevo-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-cevo-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-cevo-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-cevo-system get pods curl -o jsonpath={.status.phase} STEP: validating that the metrics endpoint is serving as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:528 @ 06/02/25 08:04:45.163 running: kubectl -n e2e-cevo-system logs curl STEP: cleaning up the curl pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:611 @ 06/02/25 08:04:45.287 running: kubectl -n e2e-cevo-system delete pods/curl < Exit [It] should generate a runnable project with the Installer - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:73 @ 06/02/25 08:04:45.376 (1m45.974s) > Enter [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:04:45.377 STEP: By removing restricted namespace label - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:60 @ 06/02/25 08:04:45.377 running: kubectl label ns e2e-cevo-system pod-security.kubernetes.io/enforce- STEP: clean up API objects created during the test - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:63 @ 06/02/25 08:04:45.466 running: make undeploy STEP: removing controller image and working dir - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:66 @ 06/02/25 08:04:54.204 running: docker rmi -f e2e-test/controller-manager:cevo < Exit [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:04:54.269 (8.893s) • [114.960 seconds] ------------------------------ kubebuilder /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:48 plugin go/v4 
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:49
should generate a runnable project using webhooks and installed with the HelmChart
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:77
> Enter [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:04:54.269
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-mtix
< Exit [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:04:54.349 (79ms)
> Enter [It] should generate a runnable project using webhooks and installed with the HelmChart - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:77 @ 06/02/25 08:04:54.349
STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:232 @ 06/02/25 08:04:54.349
running: kubebuilder init --plugins go/v4 --project-version 3 --domain example.commtix
STEP: creating API definition - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:209 @ 06/02/25 08:04:55.191
running: kubebuilder create api --group barmtix --version v1alpha1 --kind Foomtix --namespaced --resource --controller --make=false
STEP: implementing the API - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:221 @ 06/02/25 08:04:55.364
STEP: scaffolding mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:36 @ 06/02/25 08:04:55.364
running: kubebuilder create webhook --group barmtix --version v1alpha1 --kind Foomtix --defaulting --programmatic-validation --make=false
STEP: implementing the mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:47 @ 06/02/25 08:04:56.069
STEP: scaffolding conversion webhooks for testing ConversionTest v1 to v2 conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:380 @ 06/02/25 08:04:56.069
running: kubebuilder create api --group barmtix --version v1 --kind ConversionTest --controller=true --resource=true --make=false
running: kubebuilder create api --group barmtix --version v2 --kind ConversionTest --controller=false --resource=true --make=false
STEP: setting up the conversion webhook for v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:405 @ 06/02/25 08:04:56.437
running: kubebuilder create webhook --group barmtix --version v1 --kind ConversionTest --conversion --spoke v2 --make=false
STEP: implementing the size spec in v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:417 @ 06/02/25 08:04:56.625
STEP: implementing the replicas spec in v2 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:425 @ 06/02/25 08:04:56.625
STEP: installing Helm - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:79 @ 06/02/25 08:04:56.629
running: curl -fsSL -o /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-mtix/get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
running: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-mtix/get_helm.sh
running: helm version
STEP: creating manager namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:114 @ 06/02/25 08:04:58.484
running: kubectl create ns e2e-mtix-system
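Note: `kubebuilder create webhook --conversion --spoke v2` above scaffolds v1 as the conversion hub and v2 as a spoke, so the API server calls the conversion webhook whenever an object is read or written at a version other than the stored one. Once such a project is deployed, the conversion can be spot-checked by reading the same object at both versions. A minimal sketch, assuming the CRD plural is `conversiontests` under the group `barmtix.example.commtix` scaffolded above:

    # Read the sample at v1 (spec.size) and again at v2 (spec.replicas);
    # the conversion webhook translates between the versions on the fly.
    kubectl -n e2e-mtix-system get conversiontests.v1.barmtix.example.commtix conversiontest-sample -o jsonpath='{.spec.size}'
    kubectl -n e2e-mtix-system get conversiontests.v2.barmtix.example.commtix conversiontest-sample -o jsonpath='{.spec.replicas}'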
STEP: labeling the namespace to enforce the restricted security policy - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:118 @ 06/02/25 08:04:58.561
running: kubectl label --overwrite ns e2e-mtix-system pod-security.kubernetes.io/enforce=restricted
STEP: updating the go.mod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:122 @ 06/02/25 08:04:58.649
running: go mod tidy
STEP: run make all - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:126 @ 06/02/25 08:04:58.811
running: make all
STEP: building the controller image - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:130 @ 06/02/25 08:05:10.229
running: make docker-build IMG=e2e-test/controller-manager:mtix
STEP: loading the controller docker image into the kind cluster - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:134 @ 06/02/25 08:06:15.574
running: kind load docker-image e2e-test/controller-manager:mtix --name kind
STEP: building the helm-chart - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:156 @ 06/02/25 08:06:18.485
running: kubebuilder edit --plugins=helm/v1-alpha
STEP: updating values with image name - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:160 @ 06/02/25 08:06:18.511
STEP: updating values to enable prometheus - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:167 @ 06/02/25 08:06:18.512
STEP: updating values to set crd.keep false - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:171 @ 06/02/25 08:06:18.512
STEP: install with Helm release - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:175 @ 06/02/25 08:06:18.512
running: helm install release-mtix dist/chart --namespace e2e-mtix-system
STEP: Checking controllerManager and getting the name of the Pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:180 @ 06/02/25 08:06:19.438
STEP: validating that the controller-manager pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:433 @ 06/02/25 08:06:19.438
running: kubectl -n e2e-mtix-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n e2e-mtix-system get pods e2e-mtix-controller-manager-86bd678c4d-tn58q -o jsonpath={.status.phase}
running: kubectl -n e2e-mtix-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n e2e-mtix-system get pods e2e-mtix-controller-manager-86bd678c4d-tn58q -o jsonpath={.status.phase}
running: kubectl -n e2e-mtix-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n e2e-mtix-system get pods e2e-mtix-controller-manager-86bd678c4d-tn58q -o jsonpath={.status.phase}
running: kubectl -n e2e-mtix-system describe all
Name: e2e-mtix-controller-manager-86bd678c4d-tn58q Namespace: e2e-mtix-system Priority: 0 Service Account: e2e-mtix-controller-manager Node: kind-control-plane/172.18.0.2 Start Time: Mon, 02 Jun 2025 08:06:19 +0000 Labels: app.kubernetes.io/instance=release-mtix app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=e2e-mtix
app.kubernetes.io/version=0.1.0 control-plane=controller-manager helm.sh/chart=0.1.0 pod-template-hash=86bd678c4d Annotations: kubectl.kubernetes.io/default-container: manager Status: Running SeccompProfile: RuntimeDefault IP: 10.244.0.19 IPs: IP: 10.244.0.19 Controlled By: ReplicaSet/e2e-mtix-controller-manager-86bd678c4d Containers: manager: Container ID: containerd://e87bca5a28a87b824877e51dc8e1374ba68cb36f8be49975dac755d19c4d4507 Image: e2e-test/controller-manager:mtix Image ID: sha256:497a90066cb8fe229a90b5b7bc76c1c0370940afb89b7dc817f81e6edb535811 Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --leader-elect --metrics-bind-address=:8443 --health-probe-bind-address=:8081 State: Running Started: Mon, 02 Jun 2025 08:06:20 +0000 Ready: False Restart Count: 0 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-cert (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n9gtk (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready False ContainersReady False PodScheduled True Volumes: webhook-cert: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false kube-api-access-n9gtk: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt Optional: false DownwardAPI: true QoS Class: Burstable Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 3s default-scheduler Successfully assigned e2e-mtix-system/e2e-mtix-controller-manager-86bd678c4d-tn58q to kind-control-plane Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "webhook-cert" : secret "webhook-server-cert" not found Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found Normal Pulled 2s kubelet Container image "e2e-test/controller-manager:mtix" already present on machine Normal Created 2s kubelet Created container: manager Normal Started 2s kubelet Started container manager Name: e2e-mtix-controller-manager-metrics-service Namespace: e2e-mtix-system Labels: app.kubernetes.io/instance=release-mtix app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=e2e-mtix app.kubernetes.io/version=0.1.0 control-plane=controller-manager helm.sh/chart=0.1.0 Annotations: meta.helm.sh/release-name: release-mtix meta.helm.sh/release-namespace: e2e-mtix-system Selector: control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.11.178 IPs: 10.96.11.178 Port: https 8443/TCP TargetPort: 8443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-mtix-webhook-service Namespace: e2e-mtix-system Labels: app.kubernetes.io/instance=release-mtix app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=e2e-mtix app.kubernetes.io/version=0.1.0 helm.sh/chart=0.1.0 Annotations: meta.helm.sh/release-name: release-mtix 
meta.helm.sh/release-namespace: e2e-mtix-system Selector: control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.204.121 IPs: 10.96.204.121 Port: 443/TCP TargetPort: 9443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-mtix-controller-manager Namespace: e2e-mtix-system CreationTimestamp: Mon, 02 Jun 2025 08:06:19 +0000 Labels: app.kubernetes.io/instance=release-mtix app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=e2e-mtix app.kubernetes.io/version=0.1.0 control-plane=controller-manager helm.sh/chart=0.1.0 Annotations: deployment.kubernetes.io/revision: 1 meta.helm.sh/release-name: release-mtix meta.helm.sh/release-namespace: e2e-mtix-system Selector: app.kubernetes.io/instance=release-mtix,app.kubernetes.io/name=e2e-mtix,control-plane=controller-manager Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app.kubernetes.io/instance=release-mtix app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=e2e-mtix app.kubernetes.io/version=0.1.0 control-plane=controller-manager helm.sh/chart=0.1.0 Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-mtix-controller-manager Containers: manager: Image: e2e-test/controller-manager:mtix Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --leader-elect --metrics-bind-address=:8443 --health-probe-bind-address=:8081 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-cert (ro) Volumes: webhook-cert: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false Node-Selectors: Tolerations: Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: NewReplicaSet: e2e-mtix-controller-manager-86bd678c4d (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 3s deployment-controller Scaled up replica set e2e-mtix-controller-manager-86bd678c4d from 0 to 1 Name: e2e-mtix-controller-manager-86bd678c4d Namespace: e2e-mtix-system Selector: app.kubernetes.io/instance=release-mtix,app.kubernetes.io/name=e2e-mtix,control-plane=controller-manager,pod-template-hash=86bd678c4d Labels: app.kubernetes.io/instance=release-mtix app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=e2e-mtix app.kubernetes.io/version=0.1.0 control-plane=controller-manager helm.sh/chart=0.1.0 pod-template-hash=86bd678c4d Annotations: deployment.kubernetes.io/desired-replicas: 1 deployment.kubernetes.io/max-replicas: 2 deployment.kubernetes.io/revision: 1 meta.helm.sh/release-name: release-mtix meta.helm.sh/release-namespace: e2e-mtix-system Controlled By: Deployment/e2e-mtix-controller-manager Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/instance=release-mtix app.kubernetes.io/managed-by=Helm 
app.kubernetes.io/name=e2e-mtix app.kubernetes.io/version=0.1.0 control-plane=controller-manager helm.sh/chart=0.1.0 pod-template-hash=86bd678c4d Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-mtix-controller-manager Containers: manager: Image: e2e-test/controller-manager:mtix Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --leader-elect --metrics-bind-address=:8443 --health-probe-bind-address=:8081 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-cert (ro) Volumes: webhook-cert: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false Node-Selectors: Tolerations: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 3s replicaset-controller Created pod: e2e-mtix-controller-manager-86bd678c4d-tn58q STEP: Checking if all flags are applied to the manager pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:183 @ 06/02/25 08:06:22.153 running: kubectl -n e2e-mtix-system get pod e2e-mtix-controller-manager-86bd678c4d-tn58q -o jsonpath={.spec.containers[0].args} STEP: validating that the Prometheus manager has provisioned the Service - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:195 @ 06/02/25 08:06:22.242 running: kubectl get Service prometheus-operator STEP: validating that the ServiceMonitor for Prometheus is applied in the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:203 @ 06/02/25 08:06:22.328 running: kubectl -n e2e-mtix-system get ServiceMonitor STEP: validating that cert-manager has provisioned the certificate Secret - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:247 @ 06/02/25 08:06:22.415 running: kubectl -n e2e-mtix-system get secrets webhook-server-cert STEP: validating that the mutating|validating webhooks have the CA injected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:260 @ 06/02/25 08:06:22.5 running: kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io e2e-mtix-mutating-webhook-configuration -o go-template={{ range .webhooks }}{{ .clientConfig.caBundle }}{{ end }} running: kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io e2e-mtix-validating-webhook-configuration -o go-template={{ range .webhooks }}{{ .clientConfig.caBundle }}{{ end }} STEP: validating that the CA injection is applied for CRD conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:284 @ 06/02/25 08:06:22.701 running: kubectl get customresourcedefinition.apiextensions.k8s.io -o jsonpath={.items[?(@.spec.names.kind=='ConversionTest')].spec.conversion.webhook.clientConfig.caBundle} STEP: creating an instance of the CR - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:306 @ 06/02/25 08:06:23.629 running: kubectl -n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml running: kubectl -n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml running: kubectl 
-n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml running: kubectl -n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml running: kubectl -n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml running: kubectl -n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml running: kubectl -n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml running: kubectl -n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml running: kubectl -n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml running: kubectl -n e2e-mtix-system apply -f config/samples/barmtix_v1alpha1_foomtix.yaml STEP: checking the metrics values to validate that the created resource object gets reconciled - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:330 @ 06/02/25 08:06:33.597 running: kubectl get clusterrolebinding metrics-mtix running: kubectl create clusterrolebinding metrics-mtix --clusterrole=e2e-mtix-metrics-reader --serviceaccount=e2e-mtix-system:e2e-mtix-controller-manager running: kubectl create --raw /api/v1/namespaces/e2e-mtix-system/serviceaccounts/e2e-mtix-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-mtix/e2e-mtix-controller-manager-token-request STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:06:33.849 running: kubectl -n e2e-mtix-system get service e2e-mtix-controller-manager-metrics-service STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:06:33.937 running: kubectl -n e2e-mtix-system get endpoints e2e-mtix-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip} STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:06:34.025 running: kubectl -n e2e-mtix-system run curl --restart=Never --namespace e2e-mtix-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU1MTkzLCJpYXQiOjE3NDg4NTE1OTMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNzIxMzBiMmUtYjg4Ni00ZWU2LTgxMmEtNTdjNDE2M2VkYzM3Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtbXRpeC1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLW10aXgtY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiNjkzODgxN2EtOTU2ZS00NDUzLWI5MDEtZWE3MzJiNDZiNTQxIn19LCJuYmYiOjE3NDg4NTE1OTMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtbXRpeC1zeXN0ZW06ZTJlLW10aXgtY29udHJvbGxlci1tYW5hZ2VyIn0.k3gUaBrzT7iwGDWBwa7MLe8r-GjOkEWSRsiJbNI_mPccVhtQhxD3GtENBb_cSTrHdkmOlgtn03AQ_4uZKnMlvEcFRX5cq6ztdYj6-x7gantxrSZlOpV05Nr_dWyrULMQ3boWpxQtdEnY3cBjteLxotCaOtBv0un8yRMx214Qe3JFrnW8Yef28yclGtMWccW3AtzMXtqW1kTf5tfYKOmadcyhE-1MLDOfcdDeXDxsfb9SrK6DAprmlyb57ZoH7F7b5dZTbQujIno4lJYUwnJx4KcVFdk0PVT9qQD_Di-g_A4hVzgkcmY33NrfSdFmg73f8luPNxzSrhF9091IPHTdcw' https://e2e-mtix-controller-manager-metrics-service.e2e-mtix-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, 
"runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-mtix-controller-manager" } } STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:06:34.117 running: kubectl -n e2e-mtix-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-mtix-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-mtix-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-mtix-system get pods curl -o jsonpath={.status.phase} STEP: validating that the metrics endpoint is serving as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:528 @ 06/02/25 08:06:37.478 running: kubectl -n e2e-mtix-system logs curl STEP: cleaning up the curl pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:611 @ 06/02/25 08:06:37.608 running: kubectl -n e2e-mtix-system delete pods/curl STEP: validating that mutating and validating webhooks are working fine - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:344 @ 06/02/25 08:06:37.703 running: kubectl -n e2e-mtix-system get -f config/samples/barmtix_v1alpha1_foomtix.yaml -o go-template={{ .spec.count }} STEP: creating a namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:356 @ 06/02/25 08:06:37.786 running: kubectl create namespace test-webhooks STEP: applying the CR in the created namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:361 @ 06/02/25 08:06:37.864 running: kubectl apply -n test-webhooks -f config/samples/barmtix_v1alpha1_foomtix.yaml STEP: validating that mutating webhooks are working fine outside of the manager's namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:369 @ 06/02/25 08:06:37.962 running: kubectl get -n test-webhooks -f config/samples/barmtix_v1alpha1_foomtix.yaml -o go-template={{ .spec.count }} STEP: removing the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:382 @ 06/02/25 08:06:38.047 running: kubectl delete namespace test-webhooks STEP: validating the conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:386 @ 06/02/25 08:06:43.413 STEP: modifying the ConversionTest CR sample to set `size` for conversion testing - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:389 @ 06/02/25 08:06:43.413 STEP: applying the modified ConversionTest CR in v1 for conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:399 @ 06/02/25 08:06:43.413 running: kubectl -n e2e-mtix-system apply -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-mtix/config/samples/barmtix_v1_conversiontest.yaml STEP: waiting for the ConversionTest CR to appear - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:403 @ 06/02/25 08:06:43.519 running: kubectl -n e2e-mtix-system get conversiontest conversiontest-sample STEP: validating that the converted resource in v2 has replicas == 3 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:409 @ 06/02/25 08:06:43.607 running: kubectl -n e2e-mtix-system get conversiontest conversiontest-sample -o jsonpath={.spec.replicas} STEP: validating conversion metrics to confirm conversion operations - 
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:423 @ 06/02/25 08:06:43.697 running: kubectl get clusterrolebinding metrics-mtix running: kubectl create --raw /api/v1/namespaces/e2e-mtix-system/serviceaccounts/e2e-mtix-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-mtix/e2e-mtix-controller-manager-token-request STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:06:43.87 running: kubectl -n e2e-mtix-system get service e2e-mtix-controller-manager-metrics-service STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:06:43.955 running: kubectl -n e2e-mtix-system get endpoints e2e-mtix-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip} STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:06:44.04 running: kubectl -n e2e-mtix-system run curl --restart=Never --namespace e2e-mtix-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU1MjAzLCJpYXQiOjE3NDg4NTE2MDMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiY2I1N2I4MzAtYmVkNC00OTVjLWI1NjUtMzRiMTYwZDVmMWU0Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtbXRpeC1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLW10aXgtY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiNjkzODgxN2EtOTU2ZS00NDUzLWI5MDEtZWE3MzJiNDZiNTQxIn19LCJuYmYiOjE3NDg4NTE2MDMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtbXRpeC1zeXN0ZW06ZTJlLW10aXgtY29udHJvbGxlci1tYW5hZ2VyIn0.L5v9HDxb_HZPlHBVhGSNcDsIrZ32JCNaOGMxR2Ion_PpfBdIc3sEuww7WYWsgQGBuVTDnTJoH4ACZ1O0RkMcPlzpEV0fLuUmRHLaO3kY8Z8xJDEZ4sd5V5sSXDLfsaTLZ0sWJVGnRHmYgcr1vTlpnFPEYlOiYKlZ_g2gBDlifTckxsjuAjDRF34KATwkZd46j1waWhOuENnCjDyvs18Bzb5UopFqJRGJ_V9A7fGpq14fPPa1WmnG33nRiMMC1lSoLYktAi1fgPmizFqEUrsTrUeOliKnPkNflgCrrPakHDMfBWqKdMP6i8xd5es3t3jj20tSjEHbY7Cj9S4KlvhxPQ' https://e2e-mtix-controller-manager-metrics-service.e2e-mtix-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-mtix-controller-manager" } } STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:06:44.13 running: kubectl -n e2e-mtix-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-mtix-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-mtix-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-mtix-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-mtix-system get pods curl -o jsonpath={.status.phase} STEP: validating that the metrics endpoint is serving as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:528 @ 06/02/25 08:06:48.576 running: kubectl -n e2e-mtix-system logs curl STEP: cleaning up the curl pod - 
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:611 @ 06/02/25 08:06:48.702
running: kubectl -n e2e-mtix-system delete pods/curl
STEP: uninstalling Helm Release - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:84 @ 06/02/25 08:06:48.794
running: helm uninstall release-mtix --namespace e2e-mtix-system
running: kubectl wait namespace e2e-mtix-system --for=delete --timeout=2m
time="2025-06-02T08:08:49Z" level=info msg="failed to wait for namespace deletion: \"kubectl wait namespace e2e-mtix-system --for=delete --timeout=2m\" failed with error \"error: timed out waiting for the condition on namespaces/e2e-mtix-system\\n\": exit status 1"
< Exit [It] should generate a runnable project using webhooks and installed with the HelmChart - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:77 @ 06/02/25 08:08:49.159 (3m54.81s)
> Enter [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:08:49.159
STEP: By removing restricted namespace label - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:60 @ 06/02/25 08:08:49.159
running: kubectl label ns e2e-mtix-system pod-security.kubernetes.io/enforce-
STEP: clean up API objects created during the test - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:63 @ 06/02/25 08:08:49.294
running: make undeploy
STEP: removing controller image and working dir - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:66 @ 06/02/25 08:08:53.513
running: docker rmi -f e2e-test/controller-manager:mtix
< Exit [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:08:53.581 (4.422s)
• [239.312 seconds]
------------------------------
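Note: the `kubectl wait --for=delete` failure above is logged at info level and the suite moves on; a namespace can sit in Terminating while finalizers on objects inside it are still being processed. A general debugging sketch for finding what blocks deletion (not part of the test run):

    # List every namespaced resource still present in the terminating namespace,
    # then check the namespace's status conditions for finalizer hints.
    kubectl api-resources --verbs=list --namespaced -o name \
      | xargs -n 1 kubectl get -n e2e-mtix-system --ignore-not-found
    kubectl get namespace e2e-mtix-system -o jsonpath='{.status.conditions}'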
kubebuilder /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:48
plugin go/v4 /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:49
should generate a runnable project without metrics exposed /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:87
> Enter [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:08:53.581
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-xdee
< Exit [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:08:53.659 (77ms)
> Enter [It] should generate a runnable project without metrics exposed - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:87 @ 06/02/25 08:08:53.659
STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:232 @ 06/02/25 08:08:53.659
running: kubebuilder init --plugins go/v4 --project-version 3 --domain example.comxdee
STEP: creating API definition - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:209 @ 06/02/25 08:08:54.193
running: kubebuilder create api --group barxdee --version v1alpha1 --kind Fooxdee --namespaced --resource --controller --make=false
STEP: implementing the API - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:221 @ 06/02/25 08:08:54.404
STEP: scaffolding mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:83 @ 06/02/25 08:08:54.404
running: kubebuilder create webhook --group barxdee --version v1alpha1 --kind Fooxdee --defaulting --programmatic-validation --make=false
STEP: implementing the mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:94 @ 06/02/25 08:08:55.147
STEP: scaffolding conversion webhooks for testing ConversionTest v1 to v2 conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:380 @ 06/02/25 08:08:55.147
running: kubebuilder create api --group barxdee --version v1 --kind ConversionTest --controller=true --resource=true --make=false
running: kubebuilder create api --group barxdee --version v2 --kind ConversionTest --controller=false --resource=true --make=false
STEP: setting up the conversion webhook for v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:405 @ 06/02/25 08:08:55.557
running: kubebuilder create webhook --group barxdee --version v1 --kind ConversionTest --conversion --spoke v2 --make=false
STEP: implementing the size spec in v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:417 @ 06/02/25 08:08:55.742
STEP: implementing the replicas spec in v2 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:425 @ 06/02/25 08:08:55.743
STEP: creating manager namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:114 @ 06/02/25 08:08:55.745
running: kubectl create ns e2e-xdee-system
STEP: labeling the namespace to enforce the restricted security policy - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:118 @ 06/02/25 08:08:55.824
running: kubectl label --overwrite ns e2e-xdee-system pod-security.kubernetes.io/enforce=restricted
STEP: updating the go.mod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:122 @ 06/02/25 08:08:55.916
running: go mod tidy
STEP: run make all - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:126 @ 06/02/25 08:08:56.081
running: make all
STEP: building the controller image - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:130 @ 06/02/25 08:09:10.118
running: make docker-build IMG=e2e-test/controller-manager:xdee
STEP: loading the controller docker image into the kind cluster - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:134 @ 06/02/25 08:10:12.248
running: kind load docker-image e2e-test/controller-manager:xdee --name kind
STEP: deploying the controller-manager - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:139 @ 06/02/25 08:10:15.061
running: make deploy IMG=e2e-test/controller-manager:xdee
STEP: Checking controllerManager and getting the name of the Pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:180 @ 06/02/25 08:10:20.83
STEP: validating that the controller-manager pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:433 @ 06/02/25 08:10:20.83
running: kubectl -n e2e-xdee-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n e2e-xdee-system get pods e2e-xdee-controller-manager-7d4db88db9-4nqpz -o jsonpath={.status.phase}
running: kubectl -n e2e-xdee-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{
.metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-xdee-system get pods e2e-xdee-controller-manager-7d4db88db9-4nqpz -o jsonpath={.status.phase} running: kubectl -n e2e-xdee-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-xdee-system get pods e2e-xdee-controller-manager-7d4db88db9-4nqpz -o jsonpath={.status.phase} running: kubectl -n e2e-xdee-system describe all Name: e2e-xdee-controller-manager-7d4db88db9-4nqpz Namespace: e2e-xdee-system Priority: 0 Service Account: e2e-xdee-controller-manager Node: kind-control-plane/172.18.0.2 Start Time: Mon, 02 Jun 2025 08:10:20 +0000 Labels: app.kubernetes.io/name=e2e-xdee control-plane=controller-manager pod-template-hash=7d4db88db9 Annotations: kubectl.kubernetes.io/default-container: manager Status: Running SeccompProfile: RuntimeDefault IP: 10.244.0.22 IPs: IP: 10.244.0.22 Controlled By: ReplicaSet/e2e-xdee-controller-manager-7d4db88db9 Containers: manager: Container ID: containerd://4b08ef425180351eacde3990675465e4ab2c2397b436e58acfba86c3830e56e9 Image: e2e-test/controller-manager:xdee Image ID: sha256:20b79ec259760d29983f3b0bb9d605552debb7b54d23b477d8f81bbc7cc3d557 Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --leader-elect --health-probe-bind-address=:8081 --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs State: Running Started: Mon, 02 Jun 2025 08:10:22 +0000 Ready: False Restart Count: 0 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6ktf2 (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready False ContainersReady False PodScheduled True Volumes: webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false kube-api-access-6ktf2: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt Optional: false DownwardAPI: true QoS Class: Burstable Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 3s default-scheduler Successfully assigned e2e-xdee-system/e2e-xdee-controller-manager-7d4db88db9-4nqpz to kind-control-plane Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found Normal Pulled 2s kubelet Container image "e2e-test/controller-manager:xdee" already present on machine Normal Created 2s kubelet Created container: manager Normal Started 1s kubelet Started container manager Name: e2e-xdee-webhook-service Namespace: e2e-xdee-system Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-xdee Annotations: Selector: app.kubernetes.io/name=e2e-xdee,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.119.118 IPs: 10.96.119.118 Port: 443/TCP TargetPort: 9443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: 
e2e-xdee-controller-manager Namespace: e2e-xdee-system CreationTimestamp: Mon, 02 Jun 2025 08:10:20 +0000 Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-xdee control-plane=controller-manager Annotations: deployment.kubernetes.io/revision: 1 Selector: app.kubernetes.io/name=e2e-xdee,control-plane=controller-manager Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app.kubernetes.io/name=e2e-xdee control-plane=controller-manager Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-xdee-controller-manager Containers: manager: Image: e2e-test/controller-manager:xdee Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --leader-elect --health-probe-bind-address=:8081 --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) Volumes: webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false Node-Selectors: Tolerations: Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: NewReplicaSet: e2e-xdee-controller-manager-7d4db88db9 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 3s deployment-controller Scaled up replica set e2e-xdee-controller-manager-7d4db88db9 from 0 to 1 Name: e2e-xdee-controller-manager-7d4db88db9 Namespace: e2e-xdee-system Selector: app.kubernetes.io/name=e2e-xdee,control-plane=controller-manager,pod-template-hash=7d4db88db9 Labels: app.kubernetes.io/name=e2e-xdee control-plane=controller-manager pod-template-hash=7d4db88db9 Annotations: deployment.kubernetes.io/desired-replicas: 1 deployment.kubernetes.io/max-replicas: 2 deployment.kubernetes.io/revision: 1 Controlled By: Deployment/e2e-xdee-controller-manager Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/name=e2e-xdee control-plane=controller-manager pod-template-hash=7d4db88db9 Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-xdee-controller-manager Containers: manager: Image: e2e-test/controller-manager:xdee Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --leader-elect --health-probe-bind-address=:8081 --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) Volumes: webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false Node-Selectors: Tolerations: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 3s replicaset-controller Created pod: e2e-xdee-controller-manager-7d4db88db9-4nqpz STEP: Checking if all flags are applied to the manager pod - 
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:183 @ 06/02/25 08:10:23.577 running: kubectl -n e2e-xdee-system get pod e2e-xdee-controller-manager-7d4db88db9-4nqpz -o jsonpath={.spec.containers[0].args} STEP: validating that the Prometheus manager has provisioned the Service - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:195 @ 06/02/25 08:10:23.666 running: kubectl get Service prometheus-operator STEP: validating that the ServiceMonitor for Prometheus is applied in the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:203 @ 06/02/25 08:10:23.75 running: kubectl -n e2e-xdee-system get ServiceMonitor STEP: validating that cert-manager has provisioned the certificate Secret - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:247 @ 06/02/25 08:10:23.836 running: kubectl -n e2e-xdee-system get secrets webhook-server-cert STEP: validating that the mutating|validating webhooks have the CA injected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:260 @ 06/02/25 08:10:23.924 running: kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io e2e-xdee-mutating-webhook-configuration -o go-template={{ range .webhooks }}{{ .clientConfig.caBundle }}{{ end }} running: kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io e2e-xdee-validating-webhook-configuration -o go-template={{ range .webhooks }}{{ .clientConfig.caBundle }}{{ end }} STEP: validating that the CA injection is applied for CRD conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:284 @ 06/02/25 08:10:24.125 running: kubectl get customresourcedefinition.apiextensions.k8s.io -o jsonpath={.items[?(@.spec.names.kind=='ConversionTest')].spec.conversion.webhook.clientConfig.caBundle} STEP: creating an instance of the CR - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:306 @ 06/02/25 08:10:25.032 running: kubectl -n e2e-xdee-system apply -f config/samples/barxdee_v1alpha1_fooxdee.yaml running: kubectl -n e2e-xdee-system apply -f config/samples/barxdee_v1alpha1_fooxdee.yaml running: kubectl -n e2e-xdee-system apply -f config/samples/barxdee_v1alpha1_fooxdee.yaml running: kubectl -n e2e-xdee-system apply -f config/samples/barxdee_v1alpha1_fooxdee.yaml running: kubectl -n e2e-xdee-system apply -f config/samples/barxdee_v1alpha1_fooxdee.yaml running: kubectl -n e2e-xdee-system apply -f config/samples/barxdee_v1alpha1_fooxdee.yaml running: kubectl -n e2e-xdee-system apply -f config/samples/barxdee_v1alpha1_fooxdee.yaml running: kubectl -n e2e-xdee-system apply -f config/samples/barxdee_v1alpha1_fooxdee.yaml running: kubectl -n e2e-xdee-system apply -f config/samples/barxdee_v1alpha1_fooxdee.yaml STEP: validating the metrics endpoint is not working as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:339 @ 06/02/25 08:10:33.91 running: kubectl create clusterrolebinding metrics-xdee --clusterrole=e2e-xdee-metrics-reader --serviceaccount=e2e-xdee-system:e2e-xdee-controller-manager running: kubectl create --raw /api/v1/namespaces/e2e-xdee-system/serviceaccounts/e2e-xdee-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-xdee/e2e-xdee-controller-manager-token-request STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:550 @ 06/02/25 08:10:34.076 
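Note: this test deploys the project without exposing metrics (the manager args shown above carry no --metrics-bind-address flag), so the curl pod launched below is expected to fail: nothing serves behind the metrics Service name. The probe pattern itself is the same one the earlier tests use; a reduced sketch, assuming `kubectl create token` as an equivalent of the raw TokenRequest call in this log (the long Bearer strings throughout are such short-lived ServiceAccount tokens):

    # Mint a short-lived ServiceAccount token via the TokenRequest API...
    TOKEN=$(kubectl create token e2e-xdee-controller-manager -n e2e-xdee-system)
    # ...and probe the metrics endpoint with it; the Service DNS name only
    # resolves inside the cluster, hence the dedicated curl pod in the test.
    curl -v -k -H "Authorization: Bearer ${TOKEN}" \
      https://e2e-xdee-controller-manager-metrics-service.e2e-xdee-system.svc.cluster.local:8443/metrics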
running: kubectl -n e2e-xdee-system run curl --restart=Never --namespace e2e-xdee-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU1NDM0LCJpYXQiOjE3NDg4NTE4MzQsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNTg3ZWMzZTYtODM5YS00ZjRjLWFjOGItODZkZTEyM2ZhZGZhIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUteGRlZS1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLXhkZWUtY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiMDIwNTY5YjItZDVjYy00MDE3LWFhNWEtODMwODZhMjBkMDMzIn19LCJuYmYiOjE3NDg4NTE4MzQsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUteGRlZS1zeXN0ZW06ZTJlLXhkZWUtY29udHJvbGxlci1tYW5hZ2VyIn0.oBO_Jgvk9BwvL5WnTmgGYxY8vJT4IH9y4ja0kkS_gikIOzkyI8TPRuuMhUr7epENkAVuHIEH7aaqtZuXM6Gn_4SSmr0WAjpQq5Im7LHL_fp3gls-_yoSs4dRoyKIqlK1Z23j4olgWpcAxCRPQvtBevcxdifKlYGjUNhgDCnDUhI7vDKfn6yNnwmS4mUWMr5KOSQsA0n-Xp2QoiAJEXmZNWJITqsR3f7lP1jRrsYm8RxZ6TJ0OlVbNNFbmlFlA81-TwM29mP1uhCg27cLbxL-BSG_T47V5bVWxCTC3hqiMBxLJgjqfYTU5vvz_4UPyccfiWKpqGbLt17kp4weg-o6vw' https://e2e-xdee-controller-manager-metrics-service.e2e-xdee-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-xdee-controller-manager" } } STEP: validating that the curl pod fail as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:555 @ 06/02/25 08:10:34.165 running: kubectl -n e2e-xdee-system get pods curl -o jsonpath={.status.phase} STEP: validating that the metrics endpoint is not working as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:566 @ 06/02/25 08:10:34.252 running: kubectl -n e2e-xdee-system logs curl running: kubectl -n e2e-xdee-system logs curl STEP: cleaning up the curl pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:611 @ 06/02/25 08:10:35.446 running: kubectl -n e2e-xdee-system delete pods/curl STEP: validating that mutating and validating webhooks are working fine - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:344 @ 06/02/25 08:10:37.334 running: kubectl -n e2e-xdee-system get -f config/samples/barxdee_v1alpha1_fooxdee.yaml -o go-template={{ .spec.count }} STEP: creating a namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:356 @ 06/02/25 08:10:37.415 running: kubectl create namespace test-webhooks STEP: applying the CR in the created namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:361 @ 06/02/25 08:10:37.493 running: kubectl apply -n test-webhooks -f config/samples/barxdee_v1alpha1_fooxdee.yaml STEP: validating that mutating webhooks are working fine outside of the manager's namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:369 @ 06/02/25 08:10:37.591 running: kubectl get -n test-webhooks -f config/samples/barxdee_v1alpha1_fooxdee.yaml -o go-template={{ .spec.count }} STEP: removing the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:382 @ 06/02/25 08:10:37.672 running: kubectl delete namespace 
test-webhooks
STEP: validating the conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:386 @ 06/02/25 08:10:43.02
STEP: modifying the ConversionTest CR sample to set `size` for conversion testing - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:389 @ 06/02/25 08:10:43.02
STEP: applying the modified ConversionTest CR in v1 for conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:399 @ 06/02/25 08:10:43.021
running: kubectl -n e2e-xdee-system apply -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-xdee/config/samples/barxdee_v1_conversiontest.yaml
STEP: waiting for the ConversionTest CR to appear - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:403 @ 06/02/25 08:10:43.121
running: kubectl -n e2e-xdee-system get conversiontest conversiontest-sample
STEP: validating that the converted resource in v2 has replicas == 3 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:409 @ 06/02/25 08:10:43.207
running: kubectl -n e2e-xdee-system get conversiontest conversiontest-sample -o jsonpath={.spec.replicas}
< Exit [It] should generate a runnable project without metrics exposed - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:87 @ 06/02/25 08:10:43.294 (1m49.635s)
> Enter [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:10:43.294
STEP: By removing restricted namespace label - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:60 @ 06/02/25 08:10:43.294
running: kubectl label ns e2e-xdee-system pod-security.kubernetes.io/enforce-
STEP: clean up API objects created during the test - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:63 @ 06/02/25 08:10:43.383
running: make undeploy
STEP: removing controller image and working dir - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:66 @ 06/02/25 08:10:51.802
running: docker rmi -f e2e-test/controller-manager:xdee
< Exit [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:10:51.864 (8.57s)
• [118.283 seconds]
------------------------------
kubebuilder /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:48
plugin go/v4 /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:49
should generate a runnable project with metrics protected by network policies /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:91
> Enter [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:10:51.864
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-pdea
< Exit [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:10:51.941 (77ms)
> Enter [It] should generate a runnable project with metrics protected by network policies - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:91 @ 06/02/25 08:10:51.941
STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:232 @ 06/02/25 08:10:51.941
running: kubebuilder init --plugins go/v4 --project-version 3 --domain example.compdea
STEP: creating API definition - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:209 @ 06/02/25 08:10:52.587
running: kubebuilder create api --group barpdea --version v1alpha1 --kind Foopdea --namespaced --resource --controller --make=false
STEP: implementing the API - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:221 @ 06/02/25 08:10:52.866
STEP: uncomment kustomization.yaml to enable network policy - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:135 @ 06/02/25 08:10:52.867
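Note: the "uncomment kustomization.yaml" step refers to the scaffolded config/default/kustomization.yaml, in which the network-policy overlay ships commented out; enabling it is essentially a one-line uncomment. A sketch, assuming the default go/v4 scaffold layout:

    # Enable the scaffolded NetworkPolicy manifests in the default overlay.
    sed -i 's|#- ../network-policy|- ../network-policy|' config/default/kustomization.yaml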
STEP: creating manager namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:114 @ 06/02/25 08:10:52.867
running: kubectl create ns e2e-pdea-system
STEP: labeling the namespace to enforce the restricted security policy - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:118 @ 06/02/25 08:10:52.947
running: kubectl label --overwrite ns e2e-pdea-system pod-security.kubernetes.io/enforce=restricted
STEP: updating the go.mod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:122 @ 06/02/25 08:10:53.038
running: go mod tidy
STEP: run make all - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:126 @ 06/02/25 08:10:53.242
running: make all
STEP: building the controller image - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:130 @ 06/02/25 08:11:05.558
running: make docker-build IMG=e2e-test/controller-manager:pdea
STEP: loading the controller docker image into the kind cluster - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:134 @ 06/02/25 08:12:02.681
running: kind load docker-image e2e-test/controller-manager:pdea --name kind
STEP: deploying the controller-manager - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:139 @ 06/02/25 08:12:06.286
running: make deploy IMG=e2e-test/controller-manager:pdea
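Note: in a go/v4 scaffold, `make deploy` is typically a thin kustomize wrapper; under the default Makefile (not shown in this log) it amounts to roughly:

    # Point the manager Deployment at the freshly built image,
    # then render and apply the default overlay.
    cd config/manager && kustomize edit set image controller=e2e-test/controller-manager:pdea && cd ../..
    kustomize build config/default | kubectl apply -f -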
containerd://e8ea8467916bab9ee922c85f2d456f42f1e73716c12ec5676993fafbb9f4cca3 Image: e2e-test/controller-manager:pdea Image ID: sha256:d4e5939c942f3eee41132e2cbc7a6c1a62ebb550aaee20dfc221f83bbadd6f24 Port: Host Port: Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 State: Running Started: Mon, 02 Jun 2025 08:12:12 +0000 Ready: False Restart Count: 0 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d69dk (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-d69dk: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt Optional: false DownwardAPI: true QoS Class: Burstable Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2s default-scheduler Successfully assigned e2e-pdea-system/e2e-pdea-controller-manager-85fc5cb7b5-fk5xl to kind-control-plane Normal Pulled 1s kubelet Container image "e2e-test/controller-manager:pdea" already present on machine Normal Created 1s kubelet Created container: manager Normal Started 1s kubelet Started container manager Name: e2e-pdea-controller-manager-metrics-service Namespace: e2e-pdea-system Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-pdea control-plane=controller-manager Annotations: Selector: app.kubernetes.io/name=e2e-pdea,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.206.193 IPs: 10.96.206.193 Port: https 8443/TCP TargetPort: 8443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-pdea-controller-manager Namespace: e2e-pdea-system CreationTimestamp: Mon, 02 Jun 2025 08:12:11 +0000 Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-pdea control-plane=controller-manager Annotations: deployment.kubernetes.io/revision: 1 Selector: app.kubernetes.io/name=e2e-pdea,control-plane=controller-manager Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app.kubernetes.io/name=e2e-pdea control-plane=controller-manager Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-pdea-controller-manager Containers: manager: Image: e2e-test/controller-manager:pdea Port: Host Port: Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: Volumes: Node-Selectors: Tolerations: Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: NewReplicaSet: 
e2e-pdea-controller-manager-85fc5cb7b5 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 2s deployment-controller Scaled up replica set e2e-pdea-controller-manager-85fc5cb7b5 from 0 to 1 Name: e2e-pdea-controller-manager-85fc5cb7b5 Namespace: e2e-pdea-system Selector: app.kubernetes.io/name=e2e-pdea,control-plane=controller-manager,pod-template-hash=85fc5cb7b5 Labels: app.kubernetes.io/name=e2e-pdea control-plane=controller-manager pod-template-hash=85fc5cb7b5 Annotations: deployment.kubernetes.io/desired-replicas: 1 deployment.kubernetes.io/max-replicas: 2 deployment.kubernetes.io/revision: 1 Controlled By: Deployment/e2e-pdea-controller-manager Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/name=e2e-pdea control-plane=controller-manager pod-template-hash=85fc5cb7b5 Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-pdea-controller-manager Containers: manager: Image: e2e-test/controller-manager:pdea Port: Host Port: Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: Volumes: Node-Selectors: Tolerations: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 2s replicaset-controller Created pod: e2e-pdea-controller-manager-85fc5cb7b5-fk5xl STEP: Checking if all flags are applied to the manager pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:183 @ 06/02/25 08:12:13.18 running: kubectl -n e2e-pdea-system get pod e2e-pdea-controller-manager-85fc5cb7b5-fk5xl -o jsonpath={.spec.containers[0].args} STEP: validating that the Prometheus manager has provisioned the Service - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:195 @ 06/02/25 08:12:13.266 running: kubectl get Service prometheus-operator STEP: validating that the ServiceMonitor for Prometheus is applied in the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:203 @ 06/02/25 08:12:13.351 running: kubectl -n e2e-pdea-system get ServiceMonitor STEP: labeling the namespace to allow consume the metrics - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:211 @ 06/02/25 08:12:13.44 running: kubectl label namespaces e2e-pdea-system metrics=enabled STEP: Ensuring the Allow Metrics Traffic NetworkPolicy exists - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:215 @ 06/02/25 08:12:13.529 running: kubectl -n e2e-pdea-system get networkpolicy e2e-pdea-allow-metrics-traffic END STEP: Ensuring the Allow Metrics Traffic NetworkPolicy exists - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:215 @ 06/02/25 08:12:13.619 (89ms) STEP: creating an instance of the CR - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:306 @ 06/02/25 08:12:13.619 running: kubectl -n e2e-pdea-system apply -f config/samples/barpdea_v1alpha1_foopdea.yaml STEP: checking the metrics values to validate that the created resource object gets reconciled - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:330 @ 06/02/25 
08:12:13.741 running: kubectl get clusterrolebinding metrics-pdea running: kubectl create clusterrolebinding metrics-pdea --clusterrole=e2e-pdea-metrics-reader --serviceaccount=e2e-pdea-system:e2e-pdea-controller-manager running: kubectl create --raw /api/v1/namespaces/e2e-pdea-system/serviceaccounts/e2e-pdea-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-pdea/e2e-pdea-controller-manager-token-request STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:12:14.023 running: kubectl -n e2e-pdea-system get service e2e-pdea-controller-manager-metrics-service STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:12:14.119 running: kubectl -n e2e-pdea-system get endpoints e2e-pdea-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip} STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:12:14.217 running: kubectl -n e2e-pdea-system run curl --restart=Never --namespace e2e-pdea-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU1NTM0LCJpYXQiOjE3NDg4NTE5MzQsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiODY5ZjgwNDktZDk3OC00ZWQ4LThkZTEtZGFmMWFjMWE2NDAzIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtcGRlYS1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLXBkZWEtY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiYzEyMDA4YTAtODBjNC00OTQxLWJhN2ItZWE1ZjIwYzM4MzYxIn19LCJuYmYiOjE3NDg4NTE5MzQsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtcGRlYS1zeXN0ZW06ZTJlLXBkZWEtY29udHJvbGxlci1tYW5hZ2VyIn0.ULpkP4SpHq1xpISI5EVOOLRNdOUafu7Cn4qUhNe6_ZFBmBphvsYBaKlkDArgf48jJU4rm4bVNX_UtFR4POulcSggsixvv37gq72BF2BJ-cXzPlsshkxT2WhYW3p0eKulaOsnhz3fdTu4fRPeC-P2-GL4BMjCHk8Xx-VVROwQTip2Qs_TONB36Hvcaqx3HVMb2EStGD_FNElkdprg6zhiz6Dhp-n2spf9Bq7WGdttpt0wlqeO9A6H_JL-GXCUOfnu4B70EX4MSzOk9w3EsOMUkZaVer7vf-NIs6wlEXHkgUyTCGX8yEGDdeqyr9UDNKKtmV3DyvBFOMzs1fz8BE6JDg' https://e2e-pdea-controller-manager-metrics-service.e2e-pdea-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-pdea-controller-manager" } } STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:12:14.324 running: kubectl -n e2e-pdea-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-pdea-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-pdea-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-pdea-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-pdea-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-pdea-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-pdea-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-pdea-system get 
pods curl -o jsonpath={.status.phase}
[... the identical "running: kubectl -n e2e-pdea-system get pods curl -o jsonpath={.status.phase}" poll repeated for the full 240s wait; duplicate lines elided ...]
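Annotation: the repeated polls above come from a Gomega Eventually loop in the e2e suite that keeps reading the curl pod's .status.phase until it equals Succeeded or the four-minute budget is exhausted (the timeout is reported just below). A minimal sketch of that pattern, assuming plain Gomega and shelling out to kubectl; the real suite wraps kubectl in its own test-context helpers, and the package and function names here are illustrative only:

package e2esketch

import (
	"os/exec"
	"time"

	. "github.com/onsi/gomega"
)

// waitForCurlPodSucceeded polls the curl pod the way the log shows:
// one kubectl jsonpath query per attempt, compared against "Succeeded".
func waitForCurlPodSucceeded() {
	verify := func(g Gomega) {
		out, err := exec.Command("kubectl", "-n", "e2e-pdea-system",
			"get", "pods", "curl", "-o", "jsonpath={.status.phase}").CombinedOutput()
		g.Expect(err).NotTo(HaveOccurred())
		// A pod stuck in a terminal Failed phase never satisfies this
		// assertion, so Eventually gives up with the timeout and the
		// "curl pod in Failed status" message reported below.
		g.Expect(string(out)).To(Equal("Succeeded"), "curl pod in %s status", string(out))
	}
	// The 240s budget matches the "Timed out after 240.001s" report;
	// the 1s poll interval is an assumption.
	Eventually(verify, 240*time.Second, time.Second).Should(Succeed())
}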
[FAILED] Timed out after 240.001s.
The function passed to Eventually failed at /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:524 with:
curl pod in Failed status
Expected
    : Failed
to equal
    : Succeeded
In [It] at: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:526 @ 06/02/25 08:16:14.325
< Exit [It] should generate a runnable project with metrics protected by network policies - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:91 @ 06/02/25 08:16:14.325 (5m22.385s)
> Enter [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:16:14.325
STEP: By removing restricted namespace label - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:60 @ 06/02/25 08:16:14.326
running: kubectl label ns e2e-pdea-system pod-security.kubernetes.io/enforce-
STEP: clean up API objects created during the test - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:63 @ 06/02/25 08:16:14.419
running: make undeploy
STEP: removing controller image and working dir - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:66 @ 06/02/25 08:16:21.172
running: docker rmi -f e2e-test/controller-manager:pdea
< Exit [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:16:21.232 (6.906s)
• [FAILED] [329.368 seconds]
kubebuilder /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:48
  plugin go/v4 /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:49
    [It] should generate a runnable project with metrics protected by network policies /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:91
------------------------------
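Annotation: before the next spec starts, it is worth spelling out what the failed metrics check actually does. It binds the manager's ServiceAccount to the scaffolded metrics-reader ClusterRole, mints a short-lived ServiceAccount token through the TokenRequest subresource (the kubectl create --raw call above), and then curls the metrics Service over HTTPS with that bearer token from an in-cluster pod, since the Service DNS name only resolves inside the cluster. A rough client-go equivalent of the token-plus-fetch half, as a sketch only (clientset construction omitted; namespace and names copied from the log):

package e2esketch

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// fetchMetrics mirrors the token request and the curl invocation from the log.
func fetchMetrics(ctx context.Context, cs kubernetes.Interface) (string, error) {
	// TokenRequest subresource, same endpoint as:
	//   kubectl create --raw /api/v1/namespaces/e2e-pdea-system/serviceaccounts/e2e-pdea-controller-manager/token
	tr, err := cs.CoreV1().ServiceAccounts("e2e-pdea-system").CreateToken(ctx,
		"e2e-pdea-controller-manager", &authenticationv1.TokenRequest{}, metav1.CreateOptions{})
	if err != nil {
		return "", err
	}

	// Equivalent of: curl -v -k -H "Authorization: Bearer <token>" https://...:8443/metrics
	// InsecureSkipVerify mirrors curl's -k flag; acceptable only in a test harness.
	httpClient := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"https://e2e-pdea-controller-manager-metrics-service.e2e-pdea-system.svc.cluster.local:8443/metrics", nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "Bearer "+tr.Status.Token)

	resp, err := httpClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("metrics endpoint returned %s", resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}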
kubebuilder /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:48
  plugin go/v4 /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:49
    should generate a runnable project with webhooks and metrics protected by network policies /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:95
> Enter [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:16:21.232
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-wixx
< Exit [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:16:21.309 (77ms)
> Enter [It] should generate a runnable project with webhooks and metrics protected by network policies - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:95 @ 06/02/25 08:16:21.309
STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:232 @ 06/02/25 08:16:21.31
running: kubebuilder init --plugins go/v4 --project-version 3 --domain example.comwixx
STEP: creating API definition - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:209 @ 06/02/25 08:16:21.854
running: kubebuilder create api --group barwixx --version v1alpha1 --kind Foowixx --namespaced --resource --controller --make=false
STEP: implementing the API - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:221 @ 06/02/25 08:16:22.073
STEP: scaffolding mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:146 @ 06/02/25 08:16:22.074
running: kubebuilder create webhook --group barwixx --version v1alpha1 --kind Foowixx --defaulting --programmatic-validation --make=false
STEP: implementing the mutating and validating webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:157 @ 06/02/25 08:16:22.772
STEP: scaffolding conversion webhooks for testing ConversionTest v1 to v2 conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:380 @ 06/02/25 08:16:22.773
running: kubebuilder create api --group barwixx --version v1 --kind ConversionTest --controller=true --resource=true --make=false
running: kubebuilder create api --group barwixx --version v2 --kind ConversionTest --controller=false --resource=true --make=false
STEP: setting up the conversion webhook for v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:405 @ 06/02/25 08:16:23.155
running: kubebuilder create webhook --group barwixx --version v1 --kind ConversionTest --conversion --spoke v2 --make=false
STEP: implementing the size spec in v1 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:417 @ 06/02/25 08:16:23.408
STEP: implementing the replicas spec in v2 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:425 @ 06/02/25 08:16:23.408
STEP: uncomment kustomization.yaml to enable network policy - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:185 @ 06/02/25 08:16:23.411
STEP: creating manager namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:114 @ 06/02/25 08:16:23.412
running: kubectl create ns e2e-wixx-system
STEP: labeling the namespace to enforce the restricted security policy - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:118 @ 06/02/25 08:16:23.492
running: kubectl label --overwrite ns e2e-wixx-system pod-security.kubernetes.io/enforce=restricted
STEP: updating the go.mod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:122 @ 06/02/25 08:16:23.586
running: go mod tidy
STEP: run make all - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:126 @ 06/02/25 08:16:23.79
running: make all
STEP: building the controller image - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:130 @ 06/02/25 08:16:35.303
running: make docker-build IMG=e2e-test/controller-manager:wixx
STEP: loading the controller docker image into the kind cluster - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:134 @ 06/02/25 08:17:35.24
running: kind load docker-image e2e-test/controller-manager:wixx --name kind
STEP: deploying the controller-manager - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:139 @ 06/02/25 08:17:37.976
running: make deploy IMG=e2e-test/controller-manager:wixx
STEP: Checking controllerManager and getting the name of the Pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:180 @ 06/02/25 08:17:43.813
STEP: validating that the controller-manager pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:433 @ 06/02/25 08:17:43.813
running: kubectl -n e2e-wixx-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}
running: kubectl -n
e2e-wixx-system get pods e2e-wixx-controller-manager-78fd45fd47-6tkrl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-wixx-system get pods e2e-wixx-controller-manager-78fd45fd47-6tkrl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-wixx-system get pods e2e-wixx-controller-manager-78fd45fd47-6tkrl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system describe all Name: e2e-wixx-controller-manager-78fd45fd47-6tkrl Namespace: e2e-wixx-system Priority: 0 Service Account: e2e-wixx-controller-manager Node: kind-control-plane/172.18.0.2 Start Time: Mon, 02 Jun 2025 08:17:43 +0000 Labels: app.kubernetes.io/name=e2e-wixx control-plane=controller-manager pod-template-hash=78fd45fd47 Annotations: kubectl.kubernetes.io/default-container: manager Status: Running SeccompProfile: RuntimeDefault IP: 10.244.0.26 IPs: IP: 10.244.0.26 Controlled By: ReplicaSet/e2e-wixx-controller-manager-78fd45fd47 Containers: manager: Container ID: containerd://d4505effa68eb879a254c3586766494ad0c0b233f25a4228c58f83977a09d171 Image: e2e-test/controller-manager:wixx Image ID: sha256:6dd5f771f25894f6f5d76d1c2232e3aa0970244f5fe3a4e9f64eadb52a7e0729 Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs State: Running Started: Mon, 02 Jun 2025 08:17:45 +0000 Ready: False Restart Count: 0 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6btt (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready False ContainersReady False PodScheduled True Volumes: metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false kube-api-access-w6btt: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt Optional: false DownwardAPI: true QoS Class: Burstable Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 3s default-scheduler Successfully assigned e2e-wixx-system/e2e-wixx-controller-manager-78fd45fd47-6tkrl to kind-control-plane Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found Normal Pulled 2s 
kubelet Container image "e2e-test/controller-manager:wixx" already present on machine Normal Created 2s kubelet Created container: manager Normal Started 1s kubelet Started container manager Name: e2e-wixx-controller-manager-metrics-service Namespace: e2e-wixx-system Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-wixx control-plane=controller-manager Annotations: Selector: app.kubernetes.io/name=e2e-wixx,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.45.109 IPs: 10.96.45.109 Port: https 8443/TCP TargetPort: 8443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-wixx-webhook-service Namespace: e2e-wixx-system Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-wixx Annotations: Selector: app.kubernetes.io/name=e2e-wixx,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.32.85 IPs: 10.96.32.85 Port: 443/TCP TargetPort: 9443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-wixx-controller-manager Namespace: e2e-wixx-system CreationTimestamp: Mon, 02 Jun 2025 08:17:43 +0000 Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-wixx control-plane=controller-manager Annotations: deployment.kubernetes.io/revision: 1 Selector: app.kubernetes.io/name=e2e-wixx,control-plane=controller-manager Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app.kubernetes.io/name=e2e-wixx control-plane=controller-manager Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-wixx-controller-manager Containers: manager: Image: e2e-test/controller-manager:wixx Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) Volumes: metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false Node-Selectors: Tolerations: Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: NewReplicaSet: e2e-wixx-controller-manager-78fd45fd47 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 3s deployment-controller Scaled up replica set e2e-wixx-controller-manager-78fd45fd47 from 0 to 1 Name: e2e-wixx-controller-manager-78fd45fd47 Namespace: e2e-wixx-system Selector: app.kubernetes.io/name=e2e-wixx,control-plane=controller-manager,pod-template-hash=78fd45fd47 Labels: app.kubernetes.io/name=e2e-wixx control-plane=controller-manager pod-template-hash=78fd45fd47 Annotations: deployment.kubernetes.io/desired-replicas: 1 
deployment.kubernetes.io/max-replicas: 2 deployment.kubernetes.io/revision: 1 Controlled By: Deployment/e2e-wixx-controller-manager Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/name=e2e-wixx control-plane=controller-manager pod-template-hash=78fd45fd47 Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-wixx-controller-manager Containers: manager: Image: e2e-test/controller-manager:wixx Port: 9443/TCP Host Port: 0/TCP Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp/k8s-metrics-server/metrics-certs from metrics-certs (ro) /tmp/k8s-webhook-server/serving-certs from webhook-certs (ro) Volumes: metrics-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-cert Optional: false webhook-certs: Type: Secret (a volume populated by a Secret) SecretName: webhook-server-cert Optional: false Node-Selectors: Tolerations: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 3s replicaset-controller Created pod: e2e-wixx-controller-manager-78fd45fd47-6tkrl STEP: Checking if all flags are applied to the manager pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:183 @ 06/02/25 08:17:46.547 running: kubectl -n e2e-wixx-system get pod e2e-wixx-controller-manager-78fd45fd47-6tkrl -o jsonpath={.spec.containers[0].args} STEP: validating that the Prometheus manager has provisioned the Service - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:195 @ 06/02/25 08:17:46.635 running: kubectl get Service prometheus-operator STEP: validating that the ServiceMonitor for Prometheus is applied in the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:203 @ 06/02/25 08:17:46.721 running: kubectl -n e2e-wixx-system get ServiceMonitor STEP: labeling the namespace to allow consume the metrics - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:211 @ 06/02/25 08:17:46.809 running: kubectl label namespaces e2e-wixx-system metrics=enabled STEP: Ensuring the Allow Metrics Traffic NetworkPolicy exists - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:215 @ 06/02/25 08:17:46.898 running: kubectl -n e2e-wixx-system get networkpolicy e2e-wixx-allow-metrics-traffic END STEP: Ensuring the Allow Metrics Traffic NetworkPolicy exists - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:215 @ 06/02/25 08:17:46.984 (86ms) STEP: labeling the namespace to allow webhooks traffic - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:228 @ 06/02/25 08:17:46.984 running: kubectl label namespaces e2e-wixx-system webhook=enabled STEP: Ensuring the allow-webhook-traffic NetworkPolicy exists - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:233 @ 06/02/25 08:17:47.074 running: kubectl -n e2e-wixx-system get networkpolicy e2e-wixx-allow-webhook-traffic END STEP: Ensuring the allow-webhook-traffic 
NetworkPolicy exists - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:233 @ 06/02/25 08:17:47.16 (86ms) STEP: validating that cert-manager has provisioned the certificate Secret - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:247 @ 06/02/25 08:17:47.16 running: kubectl -n e2e-wixx-system get secrets webhook-server-cert STEP: validating that the mutating|validating webhooks have the CA injected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:260 @ 06/02/25 08:17:47.246 running: kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io e2e-wixx-mutating-webhook-configuration -o go-template={{ range .webhooks }}{{ .clientConfig.caBundle }}{{ end }} running: kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io e2e-wixx-validating-webhook-configuration -o go-template={{ range .webhooks }}{{ .clientConfig.caBundle }}{{ end }} STEP: validating that the CA injection is applied for CRD conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:284 @ 06/02/25 08:17:47.448 running: kubectl get customresourcedefinition.apiextensions.k8s.io -o jsonpath={.items[?(@.spec.names.kind=='ConversionTest')].spec.conversion.webhook.clientConfig.caBundle} STEP: creating an instance of the CR - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:306 @ 06/02/25 08:17:48.357 running: kubectl -n e2e-wixx-system apply -f config/samples/barwixx_v1alpha1_foowixx.yaml running: kubectl -n e2e-wixx-system apply -f config/samples/barwixx_v1alpha1_foowixx.yaml running: kubectl -n e2e-wixx-system apply -f config/samples/barwixx_v1alpha1_foowixx.yaml running: kubectl -n e2e-wixx-system apply -f config/samples/barwixx_v1alpha1_foowixx.yaml running: kubectl -n e2e-wixx-system apply -f config/samples/barwixx_v1alpha1_foowixx.yaml running: kubectl -n e2e-wixx-system apply -f config/samples/barwixx_v1alpha1_foowixx.yaml running: kubectl -n e2e-wixx-system apply -f config/samples/barwixx_v1alpha1_foowixx.yaml running: kubectl -n e2e-wixx-system apply -f config/samples/barwixx_v1alpha1_foowixx.yaml running: kubectl -n e2e-wixx-system apply -f config/samples/barwixx_v1alpha1_foowixx.yaml STEP: checking the metrics values to validate that the created resource object gets reconciled - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:330 @ 06/02/25 08:17:57.243 running: kubectl get clusterrolebinding metrics-wixx running: kubectl create clusterrolebinding metrics-wixx --clusterrole=e2e-wixx-metrics-reader --serviceaccount=e2e-wixx-system:e2e-wixx-controller-manager running: kubectl create --raw /api/v1/namespaces/e2e-wixx-system/serviceaccounts/e2e-wixx-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-wixx/e2e-wixx-controller-manager-token-request STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:17:57.49 running: kubectl -n e2e-wixx-system get service e2e-wixx-controller-manager-metrics-service STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:17:57.576 running: kubectl -n e2e-wixx-system get endpoints e2e-wixx-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip} STEP: creating a curl pod to access the metrics endpoint - 
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:17:57.662 running: kubectl -n e2e-wixx-system run curl --restart=Never --namespace e2e-wixx-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU1ODc3LCJpYXQiOjE3NDg4NTIyNzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMDQzODdhN2ItYzIzYy00N2Q0LTljNzItMzRiYTM1NGQzMTE1Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtd2l4eC1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLXdpeHgtY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiMjAwZTIyMGItMmU3Ny00OTJlLWJlNmEtYTc1Mjg0Y2VhMjcxIn19LCJuYmYiOjE3NDg4NTIyNzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtd2l4eC1zeXN0ZW06ZTJlLXdpeHgtY29udHJvbGxlci1tYW5hZ2VyIn0.sFm_f2d_jYFLDT_Zb6P2m-U_5l-_Ljd5Nwerp_-0hKL_g6APkGBC2pKZ0HAQkJJxUD65WHFvwzuixjzc6vi-jVpbEsiczc8DZDI5bb2rA5l1__GYXa5jjHxP-ESoiFZ4l60d3Y678VJQ-oSQkSVZLYsl3iVyCaYd-aKImYPJLFXiq10UtR5Z_Em6-Qf3Y6UaDxOtE269F0q4mU8sLJ5E_lKBzPZhzUq1grU5h6z2C1NoH3HhyE2b9nl7VKu2Exa2D_JohJjMrUxUJC3U8bMpO2fw_TFaMz8yXylBl3Dxzg7FIjpEqHztGF0sC7uqyq9MhVcSI_WuUwUkAN8hHbPUEw' https://e2e-wixx-controller-manager-metrics-service.e2e-wixx-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-wixx-controller-manager" } } STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:17:57.752 running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} STEP: validating that the metrics endpoint is serving as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:528 @ 06/02/25 08:18:02.189 running: kubectl -n e2e-wixx-system logs curl STEP: cleaning up the curl pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:611 @ 06/02/25 08:18:02.324 running: kubectl -n e2e-wixx-system delete pods/curl STEP: validating that mutating and validating webhooks are working fine - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:344 @ 06/02/25 08:18:02.416 running: kubectl -n e2e-wixx-system get -f config/samples/barwixx_v1alpha1_foowixx.yaml -o go-template={{ .spec.count }} STEP: creating a namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:356 @ 06/02/25 08:18:02.5 running: kubectl create namespace test-webhooks STEP: applying the CR in the created namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:361 @ 06/02/25 08:18:02.582 running: kubectl apply -n test-webhooks -f config/samples/barwixx_v1alpha1_foowixx.yaml STEP: validating that mutating webhooks are working fine outside of the manager's namespace - 
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:369 @ 06/02/25 08:18:02.683 running: kubectl get -n test-webhooks -f config/samples/barwixx_v1alpha1_foowixx.yaml -o go-template={{ .spec.count }} STEP: removing the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:382 @ 06/02/25 08:18:02.766 running: kubectl delete namespace test-webhooks STEP: validating the conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:386 @ 06/02/25 08:18:08.109 STEP: modifying the ConversionTest CR sample to set `size` for conversion testing - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:389 @ 06/02/25 08:18:08.109 STEP: applying the modified ConversionTest CR in v1 for conversion - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:399 @ 06/02/25 08:18:08.109 running: kubectl -n e2e-wixx-system apply -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-wixx/config/samples/barwixx_v1_conversiontest.yaml STEP: waiting for the ConversionTest CR to appear - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:403 @ 06/02/25 08:18:08.208 running: kubectl -n e2e-wixx-system get conversiontest conversiontest-sample STEP: validating that the converted resource in v2 has replicas == 3 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:409 @ 06/02/25 08:18:08.295 running: kubectl -n e2e-wixx-system get conversiontest conversiontest-sample -o jsonpath={.spec.replicas} STEP: validating conversion metrics to confirm conversion operations - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:423 @ 06/02/25 08:18:08.382 running: kubectl get clusterrolebinding metrics-wixx running: kubectl create --raw /api/v1/namespaces/e2e-wixx-system/serviceaccounts/e2e-wixx-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-wixx/e2e-wixx-controller-manager-token-request STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:18:08.548 running: kubectl -n e2e-wixx-system get service e2e-wixx-controller-manager-metrics-service STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:18:08.637 running: kubectl -n e2e-wixx-system get endpoints e2e-wixx-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip} STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:18:08.722 running: kubectl -n e2e-wixx-system run curl --restart=Never --namespace e2e-wixx-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU1ODg4LCJpYXQiOjE3NDg4NTIyODgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNDQyNjEzZWYtN2YwZS00YTQzLWFjZTctYWI0NzFjNTg4MzI5Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtd2l4eC1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLXdpeHgtY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiMjAwZTIyMGItMmU3Ny00OTJlLWJlNmEtYTc1Mjg0Y2VhMjcxIn19LCJuYmYiOjE3NDg4NTIyODgsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtd2l4eC1zeXN0ZW06ZTJlLXdpeHgtY29udHJvbGxlci1tYW5hZ2VyIn0.Oa0_xOnF1ZLvQ0q9FWVjiKQg7iXJ0Nw1BaUO9lqZ_fEzAWumH3KHgT-VTYBA9jLUVdLHQL_vyT7yD2251HPfZwXMYvwwwyjTtmAkTQt8Bd4oIT9kMyVyF4HJbqiobi8HTQPWHw9-N1Ol3vDNZaPheildNmt7S_l_rdQKC-ke8B8jHfRjcO82E-tPQzrd_SC_9wJw6UDsDykOpWl4V6e0Ts9di-oJMEYCg3H4s8GIV3w_RhYx5QArH5KqqipISap3Dxngnn3w6ypCWDfxw-4KQn4fAqrzr9OZQ3WTuybSLJAq97V7ZMR0BxUfFRrRLUcGN1vp5DdWVPSg0UvFxIbc-Q' https://e2e-wixx-controller-manager-metrics-service.e2e-wixx-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-wixx-controller-manager" } } STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:18:08.811 running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-wixx-system get pods curl -o jsonpath={.status.phase} STEP: validating that the metrics endpoint is serving as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:528 @ 06/02/25 08:18:13.243 running: kubectl -n e2e-wixx-system logs curl STEP: cleaning up the curl pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:611 @ 06/02/25 08:18:13.369 running: kubectl -n e2e-wixx-system delete pods/curl < Exit [It] should generate a runnable project with webhooks and metrics protected by network policies - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:95 @ 06/02/25 08:18:13.461 (1m52.152s) > Enter [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:18:13.461 STEP: By removing restricted namespace label - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:60 @ 06/02/25 08:18:13.461 running: kubectl label ns e2e-wixx-system pod-security.kubernetes.io/enforce- STEP: clean up API objects created during the test - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:63 @ 06/02/25 08:18:13.552 running: make undeploy STEP: removing controller image and working dir - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:66 @ 06/02/25 08:18:22.629 running: docker rmi -f e2e-test/controller-manager:wixx < Exit [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:18:22.69 (9.229s) • [121.458 seconds] ------------------------------ kubebuilder 
/home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:48 plugin go/v4 /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:49 should generate a runnable project with the manager running as restricted and without webhooks /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:99 > Enter [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:18:22.69 running: kubectl version -o json cleaning up tools preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-msov < Exit [BeforeEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:52 @ 06/02/25 08:18:22.767 (77ms) > Enter [It] should generate a runnable project with the manager running as restricted and without webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:99 @ 06/02/25 08:18:22.767 STEP: initializing a project - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:232 @ 06/02/25 08:18:22.767 running: kubebuilder init --plugins go/v4 --project-version 3 --domain example.commsov STEP: creating API definition - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:209 @ 06/02/25 08:18:23.614 running: kubebuilder create api --group barmsov --version v1alpha1 --kind Foomsov --namespaced --resource --controller --make=false STEP: implementing the API - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/generate_test.go:221 @ 06/02/25 08:18:23.843 STEP: creating manager namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:114 @ 06/02/25 08:18:23.843 running: kubectl create ns e2e-msov-system STEP: labeling the namespace to enforce the restricted security policy - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:118 @ 06/02/25 08:18:23.926 running: kubectl label --overwrite ns e2e-msov-system pod-security.kubernetes.io/enforce=restricted STEP: updating the go.mod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:122 @ 06/02/25 08:18:24.018 running: go mod tidy STEP: run make all - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:126 @ 06/02/25 08:18:24.2 running: make all STEP: building the controller image - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:130 @ 06/02/25 08:18:35.2 running: make docker-build IMG=e2e-test/controller-manager:msov STEP: loading the controller docker image into the kind cluster - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:134 @ 06/02/25 08:19:30.478 running: kind load docker-image e2e-test/controller-manager:msov --name kind STEP: deploying the controller-manager - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:139 @ 06/02/25 08:19:33.351 running: make deploy IMG=e2e-test/controller-manager:msov STEP: Checking controllerManager and getting the name of the Pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:180 @ 06/02/25 08:19:38.852 STEP: validating that the controller-manager pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:433 @ 06/02/25 08:19:38.852 running: kubectl -n e2e-msov-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end 
}}{{ end }} running: kubectl -n e2e-msov-system get pods e2e-msov-controller-manager-56bd7fb669-n5bfw -o jsonpath={.status.phase} running: kubectl -n e2e-msov-system get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }} running: kubectl -n e2e-msov-system get pods e2e-msov-controller-manager-56bd7fb669-n5bfw -o jsonpath={.status.phase} running: kubectl -n e2e-msov-system describe all Name: e2e-msov-controller-manager-56bd7fb669-n5bfw Namespace: e2e-msov-system Priority: 0 Service Account: e2e-msov-controller-manager Node: kind-control-plane/172.18.0.2 Start Time: Mon, 02 Jun 2025 08:19:38 +0000 Labels: app.kubernetes.io/name=e2e-msov control-plane=controller-manager pod-template-hash=56bd7fb669 Annotations: kubectl.kubernetes.io/default-container: manager Status: Running SeccompProfile: RuntimeDefault IP: 10.244.0.29 IPs: IP: 10.244.0.29 Controlled By: ReplicaSet/e2e-msov-controller-manager-56bd7fb669 Containers: manager: Container ID: containerd://17898e5c3e435222d8be067d95a6cac0fb24c5b19d505b7a6d2a8aa996ede683 Image: e2e-test/controller-manager:msov Image ID: sha256:431d2f6a67f60cd7b2df7eb716d894202b6a84bc4ae85b620c0b629c5afe221a Port: Host Port: Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 State: Running Started: Mon, 02 Jun 2025 08:19:39 +0000 Ready: False Restart Count: 0 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4985c (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-4985c: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt Optional: false DownwardAPI: true QoS Class: Burstable Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2s default-scheduler Successfully assigned e2e-msov-system/e2e-msov-controller-manager-56bd7fb669-n5bfw to kind-control-plane Normal Pulled 1s kubelet Container image "e2e-test/controller-manager:msov" already present on machine Normal Created 1s kubelet Created container: manager Normal Started 1s kubelet Started container manager Name: e2e-msov-controller-manager-metrics-service Namespace: e2e-msov-system Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-msov control-plane=controller-manager Annotations: Selector: app.kubernetes.io/name=e2e-msov,control-plane=controller-manager Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.230.175 IPs: 10.96.230.175 Port: https 8443/TCP TargetPort: 8443/TCP Endpoints: Session Affinity: None Internal Traffic Policy: Cluster Events: Name: e2e-msov-controller-manager Namespace: e2e-msov-system CreationTimestamp: Mon, 02 Jun 2025 08:19:38 +0000 Labels: app.kubernetes.io/managed-by=kustomize app.kubernetes.io/name=e2e-msov control-plane=controller-manager Annotations: deployment.kubernetes.io/revision: 1 Selector: 
app.kubernetes.io/name=e2e-msov,control-plane=controller-manager Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app.kubernetes.io/name=e2e-msov control-plane=controller-manager Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-msov-controller-manager Containers: manager: Image: e2e-test/controller-manager:msov Port: Host Port: Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: Volumes: Node-Selectors: Tolerations: Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: NewReplicaSet: e2e-msov-controller-manager-56bd7fb669 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 2s deployment-controller Scaled up replica set e2e-msov-controller-manager-56bd7fb669 from 0 to 1 Name: e2e-msov-controller-manager-56bd7fb669 Namespace: e2e-msov-system Selector: app.kubernetes.io/name=e2e-msov,control-plane=controller-manager,pod-template-hash=56bd7fb669 Labels: app.kubernetes.io/name=e2e-msov control-plane=controller-manager pod-template-hash=56bd7fb669 Annotations: deployment.kubernetes.io/desired-replicas: 1 deployment.kubernetes.io/max-replicas: 2 deployment.kubernetes.io/revision: 1 Controlled By: Deployment/e2e-msov-controller-manager Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/name=e2e-msov control-plane=controller-manager pod-template-hash=56bd7fb669 Annotations: kubectl.kubernetes.io/default-container: manager Service Account: e2e-msov-controller-manager Containers: manager: Image: e2e-test/controller-manager:msov Port: Host Port: Command: /manager Args: --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 Limits: cpu: 500m memory: 128Mi Requests: cpu: 10m memory: 64Mi Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: Volumes: Node-Selectors: Tolerations: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 2s replicaset-controller Created pod: e2e-msov-controller-manager-56bd7fb669-n5bfw STEP: Checking if all flags are applied to the manager pod - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:183 @ 06/02/25 08:19:40.381 running: kubectl -n e2e-msov-system get pod e2e-msov-controller-manager-56bd7fb669-n5bfw -o jsonpath={.spec.containers[0].args} STEP: validating that the Prometheus manager has provisioned the Service - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:195 @ 06/02/25 08:19:40.468 running: kubectl get Service prometheus-operator STEP: validating that the ServiceMonitor for Prometheus is applied in the namespace - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:203 @ 06/02/25 08:19:40.557 running: kubectl -n e2e-msov-system get ServiceMonitor 
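The spec next drives the sample CR and the metrics checks. Because the suite asserts on raw pod phase, a quicker first check when reproducing locally is to let kubectl confirm the rollout and the manager flags in one shot; a small sketch, assuming the e2e-msov names from this run (the suite generates a fresh random suffix per run, so a local run will differ):

  # Block until the controller-manager Deployment reports its replicas available
  kubectl -n e2e-msov-system rollout status deploy/e2e-msov-controller-manager --timeout=120s

  # Verify the flags the test inspects on the pod, straight from the Deployment spec
  kubectl -n e2e-msov-system get deploy e2e-msov-controller-manager \
    -o jsonpath='{.spec.template.spec.containers[0].args}'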
STEP: creating an instance of the CR - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:306 @ 06/02/25 08:19:40.645 running: kubectl -n e2e-msov-system apply -f config/samples/barmsov_v1alpha1_foomsov.yaml STEP: checking the metrics values to validate that the created resource object gets reconciled - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:330 @ 06/02/25 08:19:40.756 running: kubectl get clusterrolebinding metrics-msov running: kubectl create clusterrolebinding metrics-msov --clusterrole=e2e-msov-metrics-reader --serviceaccount=e2e-msov-system:e2e-msov-controller-manager running: kubectl create --raw /api/v1/namespaces/e2e-msov-system/serviceaccounts/e2e-msov-controller-manager/token -f /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-msov/e2e-msov-controller-manager-token-request STEP: validating that the controller-manager service is available - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:491 @ 06/02/25 08:19:41.004 running: kubectl -n e2e-msov-system get service e2e-msov-controller-manager-metrics-service STEP: ensuring the service endpoint is ready - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:498 @ 06/02/25 08:19:41.091 running: kubectl -n e2e-msov-system get endpoints e2e-msov-controller-manager-metrics-service -o jsonpath={.subsets[*].addresses[*].ip} STEP: creating a curl pod to access the metrics endpoint - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:512 @ 06/02/25 08:19:41.176 running: kubectl -n e2e-msov-system run curl --restart=Never --namespace e2e-msov-system --image=curlimages/curl:latest --overrides { "spec": { "containers": [{ "name": "curl", "image": "curlimages/curl:latest", "command": ["/bin/sh", "-c"], "args": ["curl -v -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1JN1BlSWx0ZEttUFJLRURHWnlpN3Fhamh6YkdDMWJHYUIxU1NaUFExbVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ4ODU1OTgwLCJpYXQiOjE3NDg4NTIzODAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiOGNhMTJlMDEtZDlhYi00NjlmLWEwNTMtMDBhNDFkNDZhYmIyIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJlMmUtbXNvdi1zeXN0ZW0iLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZTJlLW1zb3YtY29udHJvbGxlci1tYW5hZ2VyIiwidWlkIjoiNTIzNjE1ZmEtNTU4ZC00ZjQ4LTg1NzUtMWFiNWUzMDk5ODZhIn19LCJuYmYiOjE3NDg4NTIzODAsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDplMmUtbXNvdi1zeXN0ZW06ZTJlLW1zb3YtY29udHJvbGxlci1tYW5hZ2VyIn0.RIOeZ4jAIekXWWg87pV4djW3AngAivI4HGlUFgz1hrF45Z2cmFpLiLsO1EImFuEGcitoCdxccK_3vit_KJGff2071O8Rlmf_jYIaBqZpt__gZQYBJwJapQGXgBpCVgdr2H0W0PEdF3Zm3WEIpm_1QsBhiT57L3I9Z5hzZXgOO3fOx6FIBtkI1ciU3_5H8AFApZI3eVKTeIdJn90bdxM7F8KOnv6wx7ebpz0xGUBuHZFxXpDB40I16rC4-AYHYJ7jU5L67cJSb7b697dSyJK-JD3m5X2q9rQhiDU-tMhoeLHPajlqRU80hDiE5QlM12fBXsn6b8FiatQM3YSWbf920w' https://e2e-msov-controller-manager-metrics-service.e2e-msov-system.svc.cluster.local:8443/metrics"], "securityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": ["ALL"] }, "runAsNonRoot": true, "runAsUser": 1000, "seccompProfile": { "type": "RuntimeDefault" } } }], "serviceAccountName": "e2e-msov-controller-manager" } } STEP: validating that the curl pod is running as expected - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:517 @ 06/02/25 08:19:41.268 running: kubectl -n e2e-msov-system get pods curl -o jsonpath={.status.phase} running: kubectl -n e2e-msov-system get pods curl -o jsonpath={.status.phase} running: 
kubectl -n e2e-msov-system get pods curl -o jsonpath={.status.phase}
[the identical phase query was repeated back-to-back for the remainder of the 240s Eventually window; the duplicate invocations are elided]
[FAILED] Timed out after 240.000s.
The function passed to Eventually failed at /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:524 with:
curl pod in Failed status
Expected
    <string>: Failed
to equal
    <string>: Succeeded
In [It] at: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:526 @ 06/02/25 08:23:41.269
< Exit [It] should generate a runnable project with the manager running as restricted and without webhooks - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:99 @ 06/02/25 08:23:41.269 (5m18.502s)
> Enter [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:23:41.269
STEP: By removing restricted namespace label - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:60 @ 06/02/25 08:23:41.269
running: kubectl label ns e2e-msov-system pod-security.kubernetes.io/enforce-
STEP: clean up API objects created during the test - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:63 @ 06/02/25 08:23:41.358
running: make undeploy
STEP: removing controller image and working dir - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:66 @ 06/02/25 08:23:47.814
running: docker rmi -f e2e-test/controller-manager:msov
< Exit [AfterEach] plugin go/v4 - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:59 @ 06/02/25 08:23:47.877 (6.608s)
• [FAILED] [325.186 seconds]
kubebuilder /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:48
  plugin go/v4 /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:49
    [It] should generate a runnable project with the manager running as restricted and without webhooks /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:99
------------------------------
[AfterSuite] /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:54
> Enter [AfterSuite] TOP-LEVEL - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:54 @ 06/02/25 08:23:47.877
running: kubectl version -o json
cleaning up tools
preparing testing directory: /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e-mrlc
STEP: uninstalling the Prometheus manager bundle - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:59 @ 06/02/25 08:23:47.957
running: kubectl delete -f https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.77.1/bundle.yaml
STEP: uninstalling the cert-manager bundle - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:62 @ 06/02/25 08:23:51.079
running: kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml
< Exit [AfterSuite] TOP-LEVEL - /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/e2e_suite_test.go:54 @ 06/02/25 08:24:04.237 (16.36s)
[AfterSuite] PASSED [16.360 seconds]
------------------------------

Summarizing 2 Failures:
  [FAIL] kubebuilder plugin go/v4 [It] should generate a runnable project with metrics protected by network policies
  /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:526
  [FAIL] kubebuilder plugin go/v4 [It] should generate a runnable project with the manager running as restricted and without webhooks
  /home/prow/go/src/sigs.k8s.io/kubebuilder/test/e2e/v4/plugin_cluster_test.go:526

Ran 7 of 7 Specs in 1400.698 seconds
FAIL!
-- 5 Passed | 2 Failed | 0 Pending | 0 Skipped
--- FAIL: TestE2E (1400.70s)
FAIL
FAIL    sigs.k8s.io/kubebuilder/v4/test/e2e/v4  1400.708s
FAIL
Deleting cluster "kind" ...
Deleted nodes: ["kind-control-plane"]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Waiting 30 seconds for pods stopped with terminationGracePeriod:30
Cleaning up after docker
Waiting for docker to stop for 30 seconds
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
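Both failures trip the same Eventually assertion at plugin_cluster_test.go:526: the one-shot curl pod ends in phase Failed where the test expects Succeeded, and since the suite only polls the phase, the useful evidence lives in the pod itself. A rough local-triage sketch, assuming a kind cluster and a checkout of the repo; the -ginkgo.focus text is taken from the failure summary, and the generated e2e-* namespace suffix changes on every run, so substitute the one your run prints:

  # Re-run just the failing spec against a fresh kind cluster
  kind create cluster
  cd test/e2e/v4
  go test . -run TestE2E -timeout 0 -ginkgo.v \
    -ginkgo.focus='restricted and without webhooks'

  # While the spec is polling, inspect why curl exits non-zero
  kubectl -n e2e-<suffix>-system describe pod curl
  kubectl -n e2e-<suffix>-system logs curl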