INFO[2025-08-18T00:17:41Z] ci-operator version v20250814-e1c78f45b
INFO[2025-08-18T00:17:41Z] Loading configuration from https://config.ci.openshift.org for stolostron/multicluster-global-hub@main
INFO[2025-08-18T00:17:41Z] Resolved source https://github.com/stolostron/multicluster-global-hub to main@94f583ec, merging: #1887 e3d43999 @dependabot[bot]
INFO[2025-08-18T00:17:41Z] Loading information from https://config.ci.openshift.org for integrated stream ocp/4.18
INFO[2025-08-18T00:17:41Z] Loading information from https://config.ci.openshift.org for integrated stream ocp/4.18
INFO[2025-08-18T00:17:41Z] Building release initial from a snapshot of ocp/4.18
INFO[2025-08-18T00:17:41Z] Building release latest from a snapshot of ocp/4.18
INFO[2025-08-18T00:17:41Z] Using namespace https://console-openshift-console.apps.build11.ci.devcluster.openshift.com/k8s/cluster/projects/ci-op-yctml9n0
INFO[2025-08-18T00:17:41Z] Setting arch for src arch=amd64 reasons=test-integration
INFO[2025-08-18T00:17:41Z] Running [input:root], src, test-integration
INFO[2025-08-18T00:17:42Z] Tagging stolostron/builder:go1.24-linux into pipeline:root.
INFO[2025-08-18T00:17:47Z] Building src
INFO[2025-08-18T00:17:47Z] Created build "src-amd64"
INFO[2025-08-18T00:21:35Z] Build src-amd64 succeeded after 3m48s
INFO[2025-08-18T00:21:36Z] Retrieving digests of member images
INFO[2025-08-18T00:21:37Z] Image ci-op-yctml9n0/pipeline:src created digest=sha256:64606f6b0a9b6bf68c5c5de3b1dc82c9a05c5bb75ba29559f7662133a6e15d5e for-build=src
INFO[2025-08-18T00:21:37Z] Executing test test-integration
INFO[2025-08-18T00:43:43Z] Logs for container test in pod test-integration:
INFO[2025-08-18T00:43:43Z] GOBIN=/tmp/cr-tests-bin go install sigs.k8s.io/controller-runtime/tools/setup-envtest@release-0.20
go: downloading sigs.k8s.io/controller-runtime v0.20.5-0.20250517180713-32e5e9e948a5 go: downloading sigs.k8s.io/controller-runtime/tools/setup-envtest v0.0.0-20250517180713-32e5e9e948a5 go: downloading github.com/spf13/afero v1.12.0 go: downloading go.uber.org/zap v1.27.0 go: downloading github.com/go-logr/zapr v1.3.0 go: downloading github.com/go-logr/logr v1.4.2 go: downloading github.com/spf13/pflag v1.0.6 go: downloading sigs.k8s.io/yaml v1.4.0 go: downloading golang.org/x/text v0.21.0 go: downloading go.uber.org/multierr v1.10.0
KUBEBUILDER_ASSETS="/tmp/.local/share/kubebuilder-envtest/k8s/1.33.0-linux-amd64" go test -v `go list ./test/integration/...`
go: downloading github.com/operator-framework/api v0.33.0 go: downloading k8s.io/apimachinery v0.33.2 go: downloading sigs.k8s.io/controller-runtime v0.19.1 go: downloading github.com/fergusstrange/embedded-postgres v1.31.0 go: downloading github.com/lib/pq v1.10.9 go: downloading k8s.io/api v0.33.2 go: downloading github.com/jackc/pgx/v5 v5.7.5 go: downloading gorm.io/driver/postgres v1.6.0 go: downloading gorm.io/gorm v1.30.1 go: downloading k8s.io/client-go v0.33.2 go: downloading github.com/go-logr/logr v1.4.3 go: downloading github.com/RedHatInsights/strimzi-client-go v0.40.0 go: downloading github.com/cloudevents/sdk-go/v2 v2.16.1 go: downloading github.com/deckarep/golang-set v1.8.0 go: downloading github.com/openshift/api v0.0.0-20250220103441-744790f2cff7 go: downloading github.com/stolostron/multiclusterhub-operator v0.0.0-20250415191038-1e368a726d8b go: downloading open-cluster-management.io/api v1.0.0 go: downloading open-cluster-management.io/governance-policy-propagator v0.16.0 go: downloading go.uber.org/multierr v1.11.0 go: downloading github.com/jinzhu/now v1.1.5 go:
downloading github.com/gogo/protobuf v1.3.2 go: downloading k8s.io/utils v0.0.0-20250604170112-4c0f3b243397 go: downloading sigs.k8s.io/randfill v1.0.0 go: downloading k8s.io/klog/v2 v2.130.1 go: downloading sigs.k8s.io/structured-merge-diff/v4 v4.7.0 go: downloading github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 go: downloading github.com/jackc/puddle/v2 v2.2.2 go: downloading github.com/jackc/pgpassfile v1.0.0 go: downloading github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 go: downloading golang.org/x/crypto v0.39.0 go: downloading golang.org/x/text v0.26.0 go: downloading github.com/jinzhu/inflection v1.0.0 go: downloading k8s.io/apiextensions-apiserver v0.33.2 go: downloading gopkg.in/inf.v0 v0.9.1 go: downloading sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 go: downloading github.com/sirupsen/logrus v1.9.3 go: downloading github.com/evanphx/json-patch/v5 v5.9.11 go: downloading k8s.io/klog v1.0.0 go: downloading github.com/google/gnostic-models v0.6.9 go: downloading google.golang.org/protobuf v1.36.6 go: downloading github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 go: downloading golang.org/x/net v0.41.0 go: downloading golang.org/x/time v0.12.0 go: downloading github.com/google/uuid v1.6.0 go: downloading github.com/json-iterator/go v1.1.12 go: downloading golang.org/x/sync v0.15.0 go: downloading github.com/evanphx/json-patch v5.9.11+incompatible go: downloading github.com/fxamacker/cbor/v2 v2.8.0 go: downloading gomodules.xyz/jsonpatch/v2 v2.4.0 go: downloading github.com/blang/semver/v4 v4.0.0 go: downloading k8s.io/kube-openapi v0.0.0-20250610211856-8b98d1ed966a go: downloading golang.org/x/sys v0.33.0 go: downloading golang.org/x/term v0.32.0 go: downloading golang.org/x/oauth2 v0.30.0 go: downloading sigs.k8s.io/yaml v1.5.0 go: downloading github.com/spf13/pflag v1.0.7 go: downloading github.com/fsnotify/fsnotify v1.8.0 go: downloading github.com/prometheus/client_golang v1.22.0 go: downloading github.com/google/go-cmp v0.7.0 go: downloading golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b go: downloading gopkg.in/evanphx/json-patch.v4 v4.12.0 go: downloading gopkg.in/yaml.v3 v3.0.1 go: downloading github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd go: downloading github.com/modern-go/reflect2 v1.0.2 go: downloading github.com/x448/float16 v0.8.4 go: downloading go.yaml.in/yaml/v2 v2.4.2 go: downloading github.com/pkg/errors v0.9.1 go: downloading github.com/go-openapi/jsonreference v0.21.0 go: downloading github.com/go-openapi/swag v0.23.1 go: downloading github.com/emicklei/go-restful/v3 v3.12.1 go: downloading github.com/go-openapi/jsonpointer v0.21.1 go: downloading github.com/cespare/xxhash/v2 v2.3.0 go: downloading github.com/prometheus/client_model v0.6.2 go: downloading github.com/prometheus/common v0.65.0 go: downloading github.com/beorn7/perks v1.0.1 go: downloading github.com/prometheus/procfs v0.16.1 go: downloading github.com/mailru/easyjson v0.9.0 go: downloading github.com/josharian/intern v1.0.0 go: downloading github.com/onsi/ginkgo/v2 v2.23.4 go: downloading github.com/onsi/gomega v1.37.0 go: downloading github.com/authzed/spicedb-operator v1.20.1 go: downloading github.com/cloudflare/cfssl v1.6.5 go: downloading github.com/crunchydata/postgres-operator v1.3.3-0.20230629151007-94ebcf2df74d go: downloading github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.76.0 go: downloading 
github.com/stolostron/klusterlet-addon-controller v0.0.0-20250224012200-769f091c0e95 go: downloading github.com/gin-gonic/gin v1.10.1 go: downloading open-cluster-management.io/multicloud-operators-subscription v0.16.0 go: downloading github.com/go-kratos/kratos/v2 v2.8.4 go: downloading github.com/project-kessel/inventory-client-go v0.0.0-20240927104800-2c124202b25f go: downloading github.com/project-kessel/inventory-api v0.0.0-20241213103024-feb181fd66c1 go: downloading github.com/go-co-op/gocron v1.37.0 go: downloading open-cluster-management.io/multicloud-operators-channel v0.16.0 go: downloading sigs.k8s.io/application v0.8.3 go: downloading github.com/stolostron/cluster-lifecycle-api v0.0.0-20250429012240-363012f4f827 go: downloading open-cluster-management.io/managed-serviceaccount v0.8.0 go: downloading gorm.io/datatypes v1.2.6 go: downloading github.com/cloudevents/sdk-go/protocol/kafka_confluent/v2 v2.0.0-20250811193955-d8449ff1e35a go: downloading github.com/confluentinc/confluent-kafka-go/v2 v2.11.0 go: downloading sigs.k8s.io/kustomize/kyaml v0.20.1 go: downloading github.com/openshift/client-go v0.0.0-20250131180035-f7ec47e2d87a go: downloading open-cluster-management.io/addon-framework v0.12.1-0.20250422083707-fb6b4ebb66b5 go: downloading gopkg.in/ini.v1 v1.67.0 go: downloading gopkg.in/yaml.v2 v2.4.0 go: downloading github.com/IBM/sarama v1.45.2 go: downloading k8s.io/kube-aggregator v0.32.6 go: downloading github.com/authzed/grpcutil v0.0.0-20240123194739-2ea1e3d2d98b go: downloading github.com/golang-jwt/jwt/v5 v5.2.2 go: downloading github.com/patrickmn/go-cache v2.1.0+incompatible go: downloading google.golang.org/grpc v1.73.0 go: downloading github.com/robfig/cron/v3 v3.0.1 go: downloading go.uber.org/atomic v1.11.0 go: downloading buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.35.2-20240920164238-5a7b106cbb87.1 go: downloading google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 go: downloading github.com/gin-contrib/sse v1.1.0 go: downloading github.com/mattn/go-isatty v0.0.20 go: downloading gorm.io/driver/mysql v1.5.6 go: downloading github.com/openshift/library-go v0.0.0-20250228164547-bad2d1bf3a37 go: downloading github.com/certifi/gocertifi v0.0.0-20210507211836-431795d63e8d go: downloading github.com/grpc-ecosystem/go-grpc-middleware v1.4.0 go: downloading github.com/stretchr/testify v1.10.0 go: downloading github.com/go-kratos/aegis v0.2.0 go: downloading github.com/gorilla/mux v1.8.1 go: downloading open-cluster-management.io/sdk-go v0.16.0 go: downloading github.com/fatih/structs v1.1.0 go: downloading helm.sh/helm/v3 v3.18.4 go: downloading github.com/go-playground/validator/v10 v10.26.0 go: downloading github.com/pelletier/go-toml/v2 v2.2.4 go: downloading github.com/ugorji/go/codec v1.2.12 go: downloading github.com/go-sql-driver/mysql v1.8.1 go: downloading k8s.io/apiserver v0.33.2 go: downloading github.com/zmap/zlint/v3 v3.5.0 go: downloading github.com/google/certificate-transparency-go v1.1.7 go: downloading github.com/zmap/zcrypto v0.0.0-20230310154051-c8b263fd8300 go: downloading github.com/pelletier/go-toml v1.9.5 go: downloading github.com/eapache/go-resiliency v1.7.0 go: downloading github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3 go: downloading github.com/eapache/queue v1.1.0 go: downloading github.com/hashicorp/go-multierror v1.1.1 go: downloading github.com/jcmturner/gofork v1.7.6 go: downloading github.com/jcmturner/gokrb5/v8 v8.4.4 go: downloading github.com/klauspost/compress 
v1.18.0 go: downloading github.com/pierrec/lz4/v4 v4.1.22 go: downloading github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 go: downloading github.com/stolostron/multicloud-operators-foundation v0.0.0-20241223014534-09421f48bba2 go: downloading go.yaml.in/yaml/v3 v3.0.3 go: downloading github.com/cenkalti/backoff/v4 v4.3.0 go: downloading google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 go: downloading github.com/go-playground/form/v4 v4.2.1 go: downloading github.com/jmoiron/sqlx v1.4.0 go: downloading filippo.io/edwards25519 v1.1.0 go: downloading github.com/gabriel-vasile/mimetype v1.4.9 go: downloading github.com/go-playground/universal-translator v0.18.1 go: downloading github.com/leodido/go-urn v1.4.0 go: downloading github.com/golang/snappy v0.0.4 go: downloading github.com/hashicorp/errwrap v1.1.0 go: downloading github.com/Masterminds/semver/v3 v3.3.1 go: downloading github.com/cyphar/filepath-securejoin v0.4.1 go: downloading github.com/mitchellh/copystructure v1.2.0 go: downloading github.com/xeipuuv/gojsonschema v1.2.0 go: downloading github.com/BurntSushi/toml v1.5.0 go: downloading github.com/Masterminds/sprig/v3 v3.3.0 go: downloading github.com/gobwas/glob v0.2.3 go: downloading github.com/jcmturner/dnsutils/v2 v2.0.0 go: downloading github.com/hashicorp/go-uuid v1.0.3 go: downloading github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 go: downloading github.com/go-errors/errors v1.5.1 go: downloading github.com/weppos/publicsuffix-go v0.30.0 go: downloading sigs.k8s.io/kube-storage-version-migrator v0.0.6-0.20230721195810-5c8923c5ff96 go: downloading github.com/go-playground/locales v0.14.1 go: downloading github.com/mitchellh/reflectwalk v1.0.2 go: downloading dario.cat/mergo v1.0.1 go: downloading github.com/Masterminds/goutils v1.1.1 go: downloading github.com/huandu/xstrings v1.5.0 go: downloading github.com/shopspring/decimal v1.4.0 go: downloading github.com/spf13/cast v1.7.0 go: downloading github.com/jcmturner/rpc/v2 v2.0.3 go: downloading github.com/jcmturner/aescts/v2 v2.0.0 go: downloading github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 go: downloading github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb go: downloading k8s.io/component-base v0.33.2 go: downloading go.opentelemetry.io/otel/trace v1.36.0 go: downloading go.opentelemetry.io/otel v1.36.0
=== RUN TestIntegration
Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/controller
=====================================================================================================================================
Random Seed: 1755477722
Will run 5 of 5 specs
2025-08-18T00:42:09.656Z INFO controller/controller.go:175 Starting EventSource {"controller": "hubclusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "source": "kind source: *v1alpha1.ClusterClaim"}
2025-08-18T00:42:09.656Z INFO controller/controller.go:183 Starting Controller {"controller": "hubclusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim"}
2025-08-18T00:42:09.656Z INFO controller/controller.go:175 Starting EventSource {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "source": "kind source: *v1alpha1.ClusterClaim"}
2025-08-18T00:42:09.656Z INFO controller/controller.go:175
Starting EventSource {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "source": "kind source: *v1.MultiClusterHub"} 2025-08-18T00:42:09.656Z INFO controller/controller.go:183 Starting Controller {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim"} 2025-08-18T00:42:09.766Z INFO controller/controller.go:217 Starting workers {"controller": "hubclusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "worker count": 1} 2025-08-18T00:42:09.766Z INFO controller/controller.go:217 Starting workers {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "worker count": 1} 2025-08-18T00:42:11.701Z INFO controllers/clusterclaim_hub_controller.go:33 NamespacedName: /test2 2025-08-18T00:42:11.816Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "e4940c16-9820-4303-9d11-a589186f3ca3", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000bff710}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000bff710}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000bff680?, {0x29f75c0?, 0xc000bff710?}, {{{0x0?, 0x0?}, {0xc00091ef40?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic 
/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.816Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "e4940c16-9820-4303-9d11-a589186f3ca3", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.824Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "451fb75d-4152-4a0a-8719-9c4283323d4b", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000bffd10}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 
+0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000bffd10}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000bffc80?, {0x29f75c0?, 0xc000bffd10?}, {{{0x0?, 0x0?}, {0xc00091ef40?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.825Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "451fb75d-4152-4a0a-8719-9c4283323d4b", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.838Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "e762a33a-bb7f-441d-8262-e51493d21bdc", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000d04240}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000d04240}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000d041b0?, {0x29f75c0?, 0xc000d04240?}, {{{0x0?, 0x0?}, {0xc00091ef40?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.838Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "e762a33a-bb7f-441d-8262-e51493d21bdc", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.863Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "bf1b8cdb-770b-48cd-9831-47fb2b0ac79f", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000d046f0}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000d046f0}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000d04660?, {0x29f75c0?, 0xc000d046f0?}, {{{0x0?, 0x0?}, {0xc00091ef40?, 
0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.863Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "bf1b8cdb-770b-48cd-9831-47fb2b0ac79f", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:11.907Z ERROR runtime/runtime.go:142 
Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "1976e7e4-e083-4585-bbaf-0e21c0c83230", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000cf04b0}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000cf04b0}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000cf0420?, {0x29f75c0?, 0xc000cf04b0?}, {{{0x0?, 0x0?}, {0xc00091ef40?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc00091ef40, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.907Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "1976e7e4-e083-4585-bbaf-0e21c0c83230", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.936Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "e2000007-8f6f-4e50-a9fe-9be700ea363f", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000cf0db0}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000cf0db0}, {{{0x0, 0x0}, {0xc0008fd840, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000cf0d20?, {0x29f75c0?, 0xc000cf0db0?}, {{{0x0?, 0x0?}, {0xc0008fd840?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc0008fd840, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 
+0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.936Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "e2000007-8f6f-4e50-a9fe-9be700ea363f", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:11.941Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "92c5c120-0021-4345-8d9c-313edfb7d4d9", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000cf18f0}, {0x21e0ea0, 
0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000cf18f0}, {{{0x0, 0x0}, {0xc0008fd840, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000cf1860?, {0x29f75c0?, 0xc000cf18f0?}, {{{0x0?, 0x0?}, {0xc0008fd840?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc0008fd840, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.941Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": 
{"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "92c5c120-0021-4345-8d9c-313edfb7d4d9", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.955Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "0bd95065-5889-44a1-a035-b80e07f7230f", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000d78f90}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000d78f90}, {{{0x0, 0x0}, {0xc00091f4e0, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000d78f00?, {0x29f75c0?, 0xc000d78f90?}, {{{0x0?, 0x0?}, {0xc00091f4e0?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc00091f4e0, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.955Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "0bd95065-5889-44a1-a035-b80e07f7230f", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.975Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "4fdf1415-823d-4f50-8dda-5a79d3eea60a", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000e80ba0}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000e80ba0}, {{{0x0, 0x0}, {0xc00091f4e0, 
0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000e80b10?, {0x29f75c0?, 0xc000e80ba0?}, {{{0x0?, 0x0?}, {0xc00091f4e0?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc00091f4e0, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:11.975Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "4fdf1415-823d-4f50-8dda-5a79d3eea60a", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.316Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "abec3d94-d1ac-4d67-ac28-e9179200758b", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000e813b0}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc000e813b0}, {{{0xc000c5c5e6, 0x7}, {0xc000c5c600, 0xf}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000e81320?, {0x29f75c0?, 0xc000e813b0?}, {{{0xc000c5c5e6?, 0x0?}, {0xc000c5c600?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0xc000c5c5e6, 0x7}, {0xc000c5c600, 0xf}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.316Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "abec3d94-d1ac-4d67-ac28-e9179200758b", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:12.325Z INFO controllers/clusterclaim_hub_controller.go:33 NamespacedName: /version.open-cluster-management.io 2025-08-18T00:42:12.329Z ERROR controllers/clusterclaim_version_controller.go:35 Operation cannot be fulfilled on clusterclaims.cluster.open-cluster-management.io "hub.open-cluster-management.io": StorageError: invalid object, Code: 4, Key: /registry/cluster.open-cluster-management.io/clusterclaims/hub.open-cluster-management.io, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8365e413-6468-4800-9a81-07c82b1fbc24, UID in object meta: failed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.329Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": 
"ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "3c26dc85-c606-4acf-8db1-451cdea73b91", "error": "Operation cannot be fulfilled on clusterclaims.cluster.open-cluster-management.io \"hub.open-cluster-management.io\": StorageError: invalid object, Code: 4, Key: /registry/cluster.open-cluster-management.io/clusterclaims/hub.open-cluster-management.io, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8365e413-6468-4800-9a81-07c82b1fbc24, UID in object meta: "} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:12.339Z ERROR controllers/clusterclaim_version_controller.go:35 clusterclaims.cluster.open-cluster-management.io "hub.open-cluster-management.io" already existsfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.339Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "f497301f-4dfd-4188-b272-c31c8f603627", "error": "clusterclaims.cluster.open-cluster-management.io \"hub.open-cluster-management.io\" already exists"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.344Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": 
"ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "5c45e336-dd42-48ca-8caf-8405b2eb1210", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001223890}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc001223890}, {{{0x0, 0x0}, {0xc00104f350, 0x22}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001223800?, {0x29f75c0?, 0xc001223890?}, {{{0x0?, 0x0?}, {0xc00104f350?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc00104f350, 0x22}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.344Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "5c45e336-dd42-48ca-8caf-8405b2eb1210", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 
•2025-08-18T00:42:12.457Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "8d5eb414-18f2-4aec-a0dd-abe0fd9e5e77", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 408 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc0012bbc80}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc00126fa20, {0x29f75c0, 0xc0012bbc80}, {{{0x0, 0x0}, {0xc00104f350, 
0x22}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc0012bbbf0?, {0x29f75c0?, 0xc0012bbc80?}, {{{0x0?, 0x0?}, {0xc00104f350?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc000937090}, {{{0x0, 0x0}, {0xc00104f350, 0x22}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc000937090})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 339\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.457Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "8d5eb414-18f2-4aec-a0dd-abe0fd9e5e77", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.512Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.512Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "e5793cca-cd1d-45a2-bb46-4461498432d3", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.514Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.514Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "c1f8964d-117a-418e-a1d3-c24af7c9d39d", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.539Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.539Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "be5e6666-4f76-4b39-968d-4bc4dc7c52b7", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.679Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.679Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "1e1b5acd-83f6-47b2-b4cf-49429ae21c76", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.683Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.683Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "d7f5886c-62d1-4ef3-a232-902aeecd39ac", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.699Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:12.699Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "ebb04a8f-ffcc-46eb-b417-dfd9e04ca894", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.013Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.013Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "33463def-1ded-4e23-ae4b-c5e2c59cf8a2", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.013Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.013Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "f93fe46b-2c23-495b-9af4-ec42e99459d5", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.021Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.025Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "33f1430d-b871-4ddc-8d06-8c63e04a0ffd", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.655Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.655Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "a9fd4103-9300-4584-b1d2-f8cb900d06c0", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.656Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.656Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "83e6798b-97fb-46c8-82e5-ac2cda958576", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.666Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:13.666Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "1babb013-0efe-4bb6-93eb-b3516c9dde4b", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial 
tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:14.937Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:14.937Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "54c0280b-db5c-46b7-8678-d252b773ee99", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:14.937Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:14.937Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "a25a76e2-d24b-4482-9b49-4a948a68ad18", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:14.951Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:14.951Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "335dcda8-cebd-4f34-ac37-0138de6b0709", "error": "Put 
\"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:17.498Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:17.498Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "13cad95f-bcfc-4d71-95ee-6f5997016740", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:17.499Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile 
/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:17.499Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "d57461e4-4e7d-4ee9-a8aa-ad232b26ecf3", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:17.513Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:17.513Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", 
"reconcileID": "58321e80-186e-4b94-9118-8654dedb86c6", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:22.619Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:22.619Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "123144ed-817c-4f58-bc59-9899f2b6ea81", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:22.620Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim 
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:22.620Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "632a780d-44be-4a83-861b-d2f633bf7c1a", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:22.633Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:22.633Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": 
{"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "adf3bfd9-a1d0-43a2-8d52-26c3da3a9244", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.877Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.877Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "3a559feb-c0e3-4579-bf7f-5eba1d6f0831", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.878Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to 
update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.878Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "0a82f1cb-a37a-4b41-b384-36c94a9837a2", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.878Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:40193: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.878Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": 
"ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "d41511cd-b7e4-44aa-9114-81472aed7fe6", "error": "Put \"https://127.0.0.1:40193/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:40193: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 Ran 5 of 5 Specs in 34.139 seconds SUCCESS! -- 5 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestIntegration (34.14s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/agent/controller 34.245s === RUN TestMigration Running Suite: Agent Migration Integration Test Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration ============================================================================================================================================== Random Seed: 1755477730 Will run 17 of 17 specs 2025-08-18T00:42:18.836Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:20.980Z INFO syncers/migration_from_syncer.go:131 migration Initializing started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:42:21.082Z INFO syncers/migration_from_syncer.go:218 bootstrap secret bootstrap-hub2 is unchanged 2025-08-18T00:42:21.301Z INFO syncers/migration_from_syncer.go:270 managed clusters test-cluster-1 is updated 2025-08-18T00:42:21.301Z INFO syncers/migration_from_syncer.go:140 migration Initializing completed: migrationId=test-migration-123 •2025-08-18T00:42:21.303Z INFO syncers/migration_from_syncer.go:131 migration Deploying started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:42:21.404Z INFO syncers/migration_from_syncer.go:189 deploying: attach clusters and addonConfigs into the event 2025-08-18T00:42:21.404Z INFO syncers/migration_from_syncer.go:140 migration Deploying completed: migrationId=test-migration-123 •2025-08-18T00:42:21.506Z INFO syncers/migration_from_syncer.go:131 migration Registering started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:42:21.506Z INFO syncers/migration_from_syncer.go:340 updating managedcluster test-cluster-1 to set HubAcceptsClient as false 2025-08-18T00:42:21.510Z INFO syncers/migration_from_syncer.go:140 migration Registering completed: migrationId=test-migration-123 •2025-08-18T00:42:21.510Z INFO syncers/migration_from_syncer.go:131 migration Cleaning started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:42:21.510Z INFO syncers/migration_to_syncer.go:831 deleting resource multicluster-global-hub/bootstrap-hub2 2025-08-18T00:42:21.613Z INFO syncers/migration_to_syncer.go:831 deleting resource /migration-hub2 2025-08-18T00:42:21.617Z INFO syncers/migration_from_syncer.go:161 cleaning up 1 managed clusters 2025-08-18T00:42:21.628Z INFO syncers/migration_from_syncer.go:679 deleted managed cluster test-cluster-1 
2025-08-18T00:42:21.628Z INFO syncers/migration_from_syncer.go:140 migration Cleaning completed: migrationId=test-migration-123 •2025-08-18T00:42:22.650Z INFO syncers/migration_from_syncer.go:131 migration Rollbacking started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:42:22.650Z INFO syncers/migration_from_syncer.go:464 performing rollback for stage: Initializing 2025-08-18T00:42:22.650Z INFO syncers/migration_from_syncer.go:485 cleaning up bootstrap secret: test 2025-08-18T00:42:22.650Z INFO syncers/migration_from_syncer.go:489 successfully deleted bootstrap secret: test 2025-08-18T00:42:22.650Z INFO syncers/migration_from_syncer.go:498 cleaning up KlusterletConfig: migration-hub2 2025-08-18T00:42:22.650Z INFO syncers/migration_from_syncer.go:502 successfully deleted KlusterletConfig: migration-hub2 2025-08-18T00:42:22.650Z INFO syncers/migration_from_syncer.go:507 cleaning up annotations for managed cluster: test-cluster-1 2025-08-18T00:42:22.653Z INFO syncers/migration_from_syncer.go:553 successfully removed migration annotations from managed cluster: test-cluster-1 2025-08-18T00:42:22.653Z INFO syncers/migration_from_syncer.go:140 migration Rollbacking completed: migrationId=test-migration-123 •2025-08-18T00:42:23.658Z INFO syncers/migration_from_syncer.go:131 migration Rollbacking started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:42:23.658Z INFO syncers/migration_from_syncer.go:464 performing rollback for stage: Deploying 2025-08-18T00:42:23.658Z INFO syncers/migration_from_syncer.go:567 rollback deploying stage for clusters: [test-cluster-1] 2025-08-18T00:42:23.658Z INFO syncers/migration_from_syncer.go:498 cleaning up KlusterletConfig: migration-hub2 2025-08-18T00:42:23.658Z INFO syncers/migration_from_syncer.go:502 successfully deleted KlusterletConfig: migration-hub2 2025-08-18T00:42:23.658Z INFO syncers/migration_from_syncer.go:507 cleaning up annotations for managed cluster: test-cluster-1 2025-08-18T00:42:23.664Z INFO syncers/migration_from_syncer.go:553 successfully removed migration annotations from managed cluster: test-cluster-1 2025-08-18T00:42:23.664Z INFO syncers/migration_from_syncer.go:579 completed deploying stage rollback 2025-08-18T00:42:23.664Z INFO syncers/migration_from_syncer.go:140 migration Rollbacking completed: migrationId=test-migration-123 •2025-08-18T00:42:24.673Z INFO syncers/migration_from_syncer.go:131 migration Rollbacking started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:42:24.673Z INFO syncers/migration_from_syncer.go:464 performing rollback for stage: Registering 2025-08-18T00:42:24.673Z INFO syncers/migration_from_syncer.go:585 rollback registering stage for clusters: [test-cluster-1] 2025-08-18T00:42:24.673Z INFO syncers/migration_from_syncer.go:567 rollback deploying stage for clusters: [test-cluster-1] 2025-08-18T00:42:24.673Z INFO syncers/migration_from_syncer.go:498 cleaning up KlusterletConfig: migration-hub2 2025-08-18T00:42:24.673Z INFO syncers/migration_from_syncer.go:502 successfully deleted KlusterletConfig: migration-hub2 2025-08-18T00:42:24.673Z INFO syncers/migration_from_syncer.go:507 cleaning up annotations for managed cluster: test-cluster-1 2025-08-18T00:42:24.691Z INFO syncers/migration_from_syncer.go:553 successfully removed migration annotations from managed cluster: test-cluster-1 2025-08-18T00:42:24.691Z INFO syncers/migration_from_syncer.go:579 completed deploying stage rollback 2025-08-18T00:42:24.695Z INFO 
syncers/migration_from_syncer.go:140 migration Rollbacking completed: migrationId=test-migration-123 •2025-08-18T00:42:24.695Z INFO syncers/migration_from_syncer.go:131 migration Initializing started: migrationId=error-test-1, clusters=[test-cluster-1] 2025-08-18T00:42:24.695Z ERROR syncers/migration_from_syncer.go:135 migration Initializing failed: migrationId=error-test-1, error=bootstrap secret is nil when initializing github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).executeStage /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:135 github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).handleStage /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:112 github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).Sync /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:88 github.com/stolostron/multicluster-global-hub/test/integration/agent/migration_test.init.func1.5.1 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration/migration_from_syncer_test.go:463 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/node.go:475 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/suite.go:894 •2025-08-18T00:42:24.713Z INFO syncers/migration_from_syncer.go:131 migration Initializing started: migrationId=error-test-2, clusters=[non-existent-cluster] 2025-08-18T00:42:24.713Z INFO syncers/migration_from_syncer.go:218 bootstrap secret bootstrap-hub2-test2 is unchanged 2025-08-18T00:42:24.724Z ERROR syncers/migration_from_syncer.go:135 migration Initializing failed: migrationId=error-test-2, error=ManagedCluster.cluster.open-cluster-management.io "non-existent-cluster" not found github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).executeStage /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:135 github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).handleStage /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:112 github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).Sync /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:88 github.com/stolostron/multicluster-global-hub/test/integration/agent/migration_test.init.func1.5.2 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration/migration_from_syncer_test.go:508 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/node.go:475 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/suite.go:894 ••2025-08-18T00:42:24.780Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:42:24.780Z INFO syncers/migration_to_syncer.go:163 migration Initializing started: migrationId=test-migration-456, clusters=[] 2025-08-18T00:42:24.886Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.registrationConfiguration.autoApproveUsers" 2025-08-18T00:42:24.987Z INFO 
syncers/migration_to_syncer.go:425 creating migration clusterrole 2025-08-18T00:42:25.099Z INFO syncers/migration_to_syncer.go:541 creating subjectaccessreviews clusterrolebinding 2025-08-18T00:42:25.102Z INFO syncers/migration_to_syncer.go:483 creating agent registration clusterrolebindingclusterrolebindingglobal-hub-migration-migration-service-account-registration 2025-08-18T00:42:25.104Z INFO syncers/migration_to_syncer.go:171 migration Initializing completed: migrationId=test-migration-456 •2025-08-18T00:42:25.220Z INFO syncers/migration_to_syncer.go:69 received migration event from hub1 2025-08-18T00:42:25.220Z INFO syncers/migration_to_syncer.go:296 started the deploying: test-migration-456 2025-08-18T00:42:25.325Z INFO syncers/migration_to_syncer.go:315 finished syncing migration resources •2025-08-18T00:42:25.442Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:42:25.442Z INFO syncers/migration_to_syncer.go:163 migration Registering started: migrationId=test-migration-456, clusters=[test-cluster-2] 2025-08-18T00:42:25.542Z INFO syncers/migration_to_syncer.go:231 all 1 managed clusters are ready for migration 2025-08-18T00:42:25.543Z INFO syncers/migration_to_syncer.go:171 migration Registering completed: migrationId=test-migration-456 •2025-08-18T00:42:25.547Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:42:25.547Z INFO syncers/migration_to_syncer.go:163 migration Cleaning started: migrationId=test-migration-456, clusters=[] 2025-08-18T00:42:25.547Z INFO syncers/migration_to_syncer.go:633 auto approve user system:serviceaccount::migration-service-account not found in ClusterManager, no removal needed 2025-08-18T00:42:25.547Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:42:25.549Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:42:25.552Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-registration 2025-08-18T00:42:25.554Z INFO syncers/migration_to_syncer.go:171 migration Cleaning completed: migrationId=test-migration-456 •2025-08-18T00:42:25.664Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:42:25.664Z INFO syncers/migration_to_syncer.go:163 migration Rollbacking started: migrationId=test-migration-456, clusters=[] 2025-08-18T00:42:25.664Z INFO syncers/migration_to_syncer.go:641 performing rollback for stage: Initializing 2025-08-18T00:42:25.664Z INFO syncers/migration_to_syncer.go:633 auto approve user system:serviceaccount:open-cluster-management-agent-addon:migration-service-account not found in ClusterManager, no removal needed 2025-08-18T00:42:25.664Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:42:25.666Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-registration 2025-08-18T00:42:25.669Z INFO syncers/migration_to_syncer.go:171 migration Rollbacking completed: migrationId=test-migration-456 •2025-08-18T00:42:25.783Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:42:25.783Z INFO syncers/migration_to_syncer.go:163 migration Rollbacking started: migrationId=test-migration-456, clusters=[test-cluster-rollback-deploying] 2025-08-18T00:42:25.783Z INFO 
syncers/migration_to_syncer.go:641 performing rollback for stage: Deploying 2025-08-18T00:42:25.783Z INFO syncers/migration_to_syncer.go:681 rollback deploying stage for clusters: [test-cluster-rollback-deploying] 2025-08-18T00:42:25.786Z INFO syncers/migration_to_syncer.go:730 successfully removed managed cluster: test-cluster-rollback-deploying 2025-08-18T00:42:25.788Z INFO syncers/migration_to_syncer.go:750 successfully removed klusterlet addon config: test-cluster-rollback-deploying 2025-08-18T00:42:25.788Z INFO syncers/migration_to_syncer.go:633 auto approve user system:serviceaccount:open-cluster-management-agent-addon:migration-service-account not found in ClusterManager, no removal needed 2025-08-18T00:42:25.788Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:42:25.790Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-registration 2025-08-18T00:42:25.792Z INFO syncers/migration_to_syncer.go:704 completed deploying stage rollback 2025-08-18T00:42:25.792Z INFO syncers/migration_to_syncer.go:171 migration Rollbacking completed: migrationId=test-migration-456 •2025-08-18T00:42:25.797Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:42:25.797Z INFO syncers/migration_to_syncer.go:163 migration Rollbacking started: migrationId=test-migration-456, clusters=[test-cluster-rollback-registering] 2025-08-18T00:42:25.797Z INFO syncers/migration_to_syncer.go:641 performing rollback for stage: Registering 2025-08-18T00:42:25.797Z INFO syncers/migration_to_syncer.go:710 rollback registering stage for clusters: [test-cluster-rollback-registering] 2025-08-18T00:42:25.797Z INFO syncers/migration_to_syncer.go:681 rollback deploying stage for clusters: [test-cluster-rollback-registering] 2025-08-18T00:42:25.800Z INFO syncers/migration_to_syncer.go:730 successfully removed managed cluster: test-cluster-rollback-registering 2025-08-18T00:42:25.800Z INFO syncers/migration_to_syncer.go:740 klusterlet addon config test-cluster-rollback-registering not found, already removed 2025-08-18T00:42:25.800Z INFO syncers/migration_to_syncer.go:633 auto approve user system:serviceaccount:open-cluster-management-agent-addon:migration-service-account not found in ClusterManager, no removal needed 2025-08-18T00:42:25.800Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:42:25.802Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-registration 2025-08-18T00:42:25.805Z INFO syncers/migration_to_syncer.go:704 completed deploying stage rollback 2025-08-18T00:42:25.805Z INFO syncers/migration_to_syncer.go:171 migration Rollbacking completed: migrationId=test-migration-456 •2025-08-18T00:42:25.814Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:42:25.814Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:42:25.814Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:42:25.814Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 00:42:25.814954 24326 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManifestWork" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has 
prevented the request from succeeding" I0818 00:42:25.814969 24326 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.KlusterletConfig" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:25.815016 24326 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.KlusterletAddonConfig" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:25.815037 24326 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Namespace" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:25.815079 24326 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManagedCluster" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:25.815080 24326 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ClusterRoleBinding" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:25.815122 24326 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ClusterRole" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:25.815194 24326 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ClusterManager" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" 2025-08-18T00:42:25.815Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:42:25.815Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:42:25.815Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager Ran 17 of 17 Specs in 16.605 seconds SUCCESS! 
-- 17 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestMigration (16.61s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/agent/migration 16.659s === RUN TestSyncers Running Suite: Spec Syncers Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/spec ===================================================================================================================== Random Seed: 1755477730 Will run 5 of 5 specs 2025-08-18T00:42:21.075Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:21.075Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Generic"} 2025-08-18T00:42:21.075Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "ManagedClustersLabels"} 2025-08-18T00:42:21.075Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "MigrationSourceHubCluster"} 2025-08-18T00:42:21.075Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "MigrationTargetHubCluster"} 2025-08-18T00:42:21.075Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Resync"} 2025-08-18T00:42:21.075Z INFO spec/spec.go:55 added the spec controllers to manager 2025-08-18T00:42:21.087Z INFO workers/worker_pool.go:62 starting worker pool {"size": 2} 2025-08-18T00:42:21.088Z INFO spec/dispatcher.go:51 started dispatching received bundles... 2025-08-18T00:42:21.088Z INFO status-resyncer syncers/resync_syncer.go:43 resyncing event type {"eventType": "unknownMsg"} 2025-08-18T00:42:21.088Z INFO status-resyncer syncers/resync_syncer.go:48 event type unknownMsg is not registered for resync 2025-08-18T00:42:21.088Z INFO status-resyncer syncers/resync_syncer.go:43 resyncing event type {"eventType": "managedhub.info"} 2025-08-18T00:42:21.091Z INFO spec worker 1 workers/worker.go:46 start running worker {"Id: ": 1} 2025-08-18T00:42:21.098Z INFO spec worker 2 workers/worker.go:46 start running worker {"Id: ": 2} •create spec resource: { "kind": "Placement", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "metadata": { "name": "test-placements", "namespace": "default", "uid": "74b6c622-4eb2-42ae-9a41-57e25713ce4e", "resourceVersion": "349", "generation": 1, "creationTimestamp": "2025-08-18T00:42:21Z", "annotations": { "global-hub.open-cluster-management.io/origin-ownerreference-uid": "ba34de6a-0af0-4c46-933f-1929b577e3fe" }, "managedFields": [ { "manager": "spec.test", "operation": "Update", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "time": "2025-08-18T00:42:21Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:global-hub.open-cluster-management.io/origin-ownerreference-uid": {} } }, "f:spec": { ".": {}, "f:clusterSets": {}, "f:prioritizerPolicy": { ".": {}, "f:mode": {} } } } } ] }, "spec": { "clusterSets": [ "cluster1", "cluster2" ], "prioritizerPolicy": { "mode": "Additive" }, "spreadPolicy": {}, "decisionStrategy": { "groupStrategy": { "clustersPerDecisionGroup": 0 } } }, "status": { "numberOfSelectedClusters": 0, "decisionGroups": null, "conditions": null } } •create spec resource: { "kind": "PlacementBinding", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "test-placementbinding", "namespace": "default", "uid": "24a2f18a-195c-450c-a5b4-69f503d86157", "resourceVersion": "350", "generation": 1, "creationTimestamp": "2025-08-18T00:42:21Z", "annotations": { "global-hub.open-cluster-management.io/origin-ownerreference-uid": 
"c22ad408-8d90-4338-9dc8-0da88f5829b2" }, "managedFields": [ { "manager": "spec.test", "operation": "Update", "apiVersion": "policy.open-cluster-management.io/v1", "time": "2025-08-18T00:42:21Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:global-hub.open-cluster-management.io/origin-ownerreference-uid": {} } }, "f:placementRef": { ".": {}, "f:apiGroup": {}, "f:kind": {}, "f:name": {} }, "f:subjects": {} } } ] }, "placementRef": { "apiGroup": "cluster.open-cluster-management.io", "kind": "Placement", "name": "placement-policy-limitrange" }, "subjects": [ { "apiGroup": "policy.open-cluster-management.io", "kind": "Policy", "name": "policy-limitrange" } ], "bindingOverrides": {}, "status": {} } ••map[test:add vendor:OpenShift] •2025-08-18T00:42:21.713Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:42:21.713Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:42:21.713Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:42:21.713Z INFO spec/dispatcher.go:56 stopped dispatching bundles 2025-08-18T00:42:21.713Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 00:42:21.713741 24347 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManagedCluster" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:21.713820 24347 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:21.713880 24347 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.PlacementBinding" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:21.713928 24347 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta1.Placement" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" 2025-08-18T00:42:21.713Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:42:21.714Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:42:21.714Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager Ran 5 of 5 Specs in 12.379 seconds SUCCESS! 
-- 5 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestSyncers (12.38s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/agent/spec 12.417s === RUN TestControllers Running Suite: Status Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/status ======================================================================================================================================== Random Seed: 1755477730 Will run 26 of 26 specs 2025-08-18T00:42:20.660Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:20.660Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:20.660Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:20.660Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:20.660Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:20.660Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:20.661Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:20.662Z INFO controller/controller.go:183 Starting Controller {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:42:20.662Z INFO controller/controller.go:217 Starting workers {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "worker count": 1} 2025-08-18T00:42:20.662Z INFO controller/controller.go:132 Starting EventSource {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "source": "kind source: *v1.Policy"} 2025-08-18T00:42:20.662Z INFO generic/periodic_syncer.go:69 Registered emitter for event type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec 2025-08-18T00:42:20.662Z INFO controller/controller.go:183 Starting Controller {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision"} 2025-08-18T00:42:20.662Z INFO controller/controller.go:217 Starting workers {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision", "worker count": 1} 2025-08-18T00:42:20.662Z INFO controller/controller.go:175 Starting EventSource {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement", "source": "kind source: *v1beta1.Placement"} 2025-08-18T00:42:20.662Z INFO controller/controller.go:183 Starting Controller {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"} 2025-08-18T00:42:20.662Z INFO controller/controller.go:132 Starting EventSource {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision", "source": "kind source: *v1beta1.PlacementDecision"} 2025-08-18T00:42:20.662Z INFO controller/controller.go:175 Starting EventSource {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "source": "kind source: *v1.Policy"} 2025-08-18T00:42:20.662Z INFO controller/controller.go:183 Starting Controller {"controller": "policy", "controllerGroup": 
"policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:42:20.662Z INFO controller/controller.go:175 Starting EventSource {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:42:20.662Z INFO controller/controller.go:183 Starting Controller {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:42:20.662Z INFO status.hub_cluster_heartbeat generic/multi_object_syncer.go:78 sync interval has been reset to 2s 2025-08-18T00:42:20.764Z INFO controller/controller.go:217 Starting workers {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "worker count": 1} 2025-08-18T00:42:20.771Z INFO controller/controller.go:217 Starting workers {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement", "worker count": 1} 2025-08-18T00:42:20.775Z INFO controller/controller.go:183 Starting Controller {"controller": "route", "controllerGroup": "route.openshift.io", "controllerKind": "Route"} 2025-08-18T00:42:20.775Z INFO controller/controller.go:217 Starting workers {"controller": "route", "controllerGroup": "route.openshift.io", "controllerKind": "Route", "worker count": 1} 2025-08-18T00:42:20.775Z INFO status.hub_cluster_info generic/multi_object_syncer.go:78 sync interval has been reset to 2s 2025-08-18T00:42:20.775Z INFO controller/controller.go:175 Starting EventSource {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion", "source": "kind source: *v1.ClusterVersion"} 2025-08-18T00:42:20.775Z INFO controller/controller.go:183 Starting Controller {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion"} 2025-08-18T00:42:20.775Z INFO controller/controller.go:132 Starting EventSource {"controller": "route", "controllerGroup": "route.openshift.io", "controllerKind": "Route", "source": "kind source: *v1.Route"} 2025-08-18T00:42:20.776Z INFO generic/periodic_syncer.go:69 Registered emitter for event type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster 2025-08-18T00:42:20.776Z INFO controller/controller.go:183 Starting Controller {"controller": "subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport"} 2025-08-18T00:42:20.776Z INFO controller/controller.go:217 Starting workers {"controller": "subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport", "worker count": 1} 2025-08-18T00:42:20.776Z INFO controller/controller.go:175 Starting EventSource {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster", "source": "kind source: *v1.ManagedCluster"} 2025-08-18T00:42:20.776Z INFO controller/controller.go:183 Starting Controller {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster"} 2025-08-18T00:42:20.776Z INFO controller/controller.go:132 Starting EventSource {"controller": "subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport", "source": "kind source: *v1alpha1.SubscriptionReport"} 2025-08-18T00:42:20.776Z INFO controller/controller.go:175 Starting EventSource {"controller": "event", "controllerGroup": "", 
"controllerKind": "Event", "source": "kind source: *v1.Event"} 2025-08-18T00:42:20.776Z INFO controller/controller.go:183 Starting Controller {"controller": "event", "controllerGroup": "", "controllerKind": "Event"} 2025-08-18T00:42:20.895Z INFO controller/controller.go:217 Starting workers {"controller": "event", "controllerGroup": "", "controllerKind": "Event", "worker count": 1} 2025-08-18T00:42:20.895Z INFO controller/controller.go:217 Starting workers {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster", "worker count": 1} 2025-08-18T00:42:20.895Z INFO controller/controller.go:217 Starting workers {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap", "worker count": 1} 2025-08-18T00:42:20.895Z INFO configmap/config_controller.go:105 setting resync.managedcluster interval to 30m0s 2025-08-18T00:42:20.895Z INFO configmap/config_controller.go:96 managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:20.895Z INFO configmap/config_controller.go:105 setting resync.policy.localspec interval to 45m0s 2025-08-18T00:42:20.895Z INFO configmap/config_controller.go:105 setting policy.localspec interval to 3s 2025-08-18T00:42:20.895Z INFO configmap/config_controller.go:105 setting resync.managedhub.info interval to 2h0m0s 2025-08-18T00:42:20.895Z INFO configmap/config_controller.go:105 setting managedhub.info interval to 2s 2025-08-18T00:42:20.895Z INFO configmap/config_controller.go:105 setting resync.managedhub.heartbeat interval to 20m0s 2025-08-18T00:42:20.895Z INFO configmap/config_controller.go:105 setting managedhub.heartbeat interval to 2s 2025-08-18T00:42:20.896Z INFO configmap/config_controller.go:105 setting resync.event.managedcluster interval to 25m0s 2025-08-18T00:42:20.896Z INFO configmap/config_controller.go:105 setting event.managedcluster interval to 3s 2025-08-18T00:42:20.896Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:42:20.896Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value 2025-08-18T00:42:20.978Z INFO controller/controller.go:217 Starting workers {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion", "worker count": 1} •2025-08-18T00:42:21.790Z ERROR configmap/config_controller.go:102 failed to parse resync.managedcluster sync interval: time: invalid duration "also-invalid" github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).setSyncInterval /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:102 github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:65 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:21.790Z ERROR configmap/config_controller.go:102 failed to parse managedcluster sync interval: time: invalid duration "invalid-duration" github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).setSyncInterval /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:102 github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:66 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:96 resync.managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:105 setting managedhub.info interval to 3s 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:42:21.790Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value 2025-08-18T00:42:22.776Z INFO status.hub_cluster_info generic/multi_object_syncer.go:92 sync interval has been reset to 3s •2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 resync.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in 
configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 resync.managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:23.797Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:42:23.797Z INFO logger/level.go:37 set the logLevel: debug 2025-08-18T00:42:23.797Z DEBUG configmap/config_controller.go:89 Reconciliation complete. {"Request.Namespace": "multicluster-global-hub-agent", "Request.Name": "multicluster-global-hub-agent-config"} 2025-08-18T00:42:24.666Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:25.663Z INFO status.placement generic/multi_event_syncer.go:147 sync interval has been reset to 3s 2025-08-18T00:42:25.661Z INFO status.policy generic/multi_event_syncer.go:147 sync interval has been reset to 3s 2025-08-18T00:42:25.663Z INFO status.placement_decision generic/multi_event_syncer.go:147 sync interval has been reset to 3s 2025-08-18T00:42:25.776Z INFO status.subscription_report generic/multi_event_syncer.go:147 sync interval has been reset to 3s 2025-08-18T00:42:25.777Z INFO status.event generic/multi_event_syncer.go:147 sync interval has been reset to 3s •2025-08-18T00:42:26.663Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:96 resync.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:96 managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:96 resync.managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:105 setting managedhub.info interval to 1s 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:96 resync.event.managedcluster 
sync interval not defined in configmap, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:42:27.835Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value 2025-08-18T00:42:27.835Z DEBUG configmap/config_controller.go:89 Reconciliation complete. {"Request.Namespace": "multicluster-global-hub-agent", "Request.Name": "multicluster-global-hub-agent-config"} •2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 resync.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 resync.managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:42:27.847Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value 2025-08-18T00:42:27.847Z DEBUG configmap/config_controller.go:89 Reconciliation complete. 
{"Request.Namespace": "multicluster-global-hub-agent", "Request.Name": "multicluster-global-hub-agent-config"} 2025-08-18T00:42:28.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:28.777Z INFO status.hub_cluster_info generic/multi_object_syncer.go:92 sync interval has been reset to 1s •2025-08-18T00:42:29.857Z ERROR configmap/config_controller.go:102 failed to parse resync.managedcluster sync interval: time: invalid duration "not-a-duration" github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).setSyncInterval /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:102 github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:65 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:105 setting managedcluster interval to 4s 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:105 setting resync.managedhub.info interval to 35m0s 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:105 setting managedhub.info interval to 3s 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:42:29.857Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value 2025-08-18T00:42:29.857Z DEBUG configmap/config_controller.go:89 Reconciliation complete. 
{"Request.Namespace": "multicluster-global-hub-agent", "Request.Name": "multicluster-global-hub-agent-config"} 2025-08-18T00:42:30.666Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:30.783Z INFO status.hub_cluster_info generic/multi_object_syncer.go:92 sync interval has been reset to 3s •2025-08-18T00:42:32.663Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:34.666Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:34.821Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "event.clustergroupupgrade"} >>>>>>>>>>>>>>>>>>> cgu event Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.clustergroupupgrade source: hub1 id: 3379dd17-f858-4482-b8f0-7ee3c2cf8ee3 time: 2025-08-18T00:42:34.821354011Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "eventNamespace": "cgu-ns1", "eventName": "cgu-ns1.event.17cd34e8c8b27fdd", "eventAnnotations": { "cgu.openshift.io/event-type": "global", "cgu.openshift.io/total-clusters-count": "2" }, "cguName": "test-cgu1", "leafHubName": "hub1", "message": "ClusterGroupUpgrade test-cgu1 succeeded remediating policies", "reason": "CguSuccess", "reportingController": "cgu-controller", "reportingInstance": "cgu-controller-6794cf54d9-j7lgm", "type": "Normal", "createdAt": "2025-08-18T00:42:31Z" } ] •Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.heartbeat source: hub1 id: 1b293aee-4444-4232-9dfa-6e6ba03839c8 time: 2025-08-18T00:42:20.663710436Z datacontenttype: application/json Extensions, extversion: 0.0 Data, [] •2025-08-18T00:42:34.915Z DEBUG status.&TypeMeta{Kind:,APIVersion:,} generic/multi_object_syncer.go:187 Reconciliation complete. {"Namespace": "", "Name": "version"} 2025-08-18T00:42:34.915Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.signatureStores" 2025-08-18T00:42:36.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:36.784Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.info"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.info source: hub1 id: 2c684abe-e7cd-44b5-ba1b-89f253311f32 time: 2025-08-18T00:42:36.784734432Z datacontenttype: application/json Extensions, extversion: 0.1 Data, { "consoleURL": "", "grafanaURL": "", "mchVersion": "", "clusterId": "00000000-0000-0000-0000-000000000001" } 2025-08-18T00:42:36.792Z DEBUG status.&TypeMeta{Kind:,APIVersion:,} generic/multi_object_syncer.go:187 Reconciliation complete. {"Namespace": "openshift-console", "Name": "console"} 2025-08-18T00:42:36.798Z DEBUG status.&TypeMeta{Kind:,APIVersion:,} generic/multi_object_syncer.go:187 Reconciliation complete. 
{"Namespace": "open-cluster-management-observability", "Name": "grafana"} 2025-08-18T00:42:38.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:39.785Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.info"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.info source: hub1 id: a8d30db7-a59c-4bd1-ae7c-4e2f469b400a time: 2025-08-18T00:42:39.785116085Z datacontenttype: application/json Extensions, extversion: 1.3 Data, { "consoleURL": "https://console-openshift-console.apps.test-cluster", "grafanaURL": "https://grafana-open-cluster-management-observability.apps.test-cluster", "mchVersion": "", "clusterId": "00000000-0000-0000-0000-000000000001" } •2025-08-18T00:42:39.790Z INFO KubeAPIWarningLogger log/warning_handler.go:65 metadata.finalizers: "cleaning-up": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers 2025-08-18T00:42:40.662Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: datacontenttype: application/json Extensions, extversion: 3.1 Data (binary), { "update": [ { "kind": "ManagedCluster", "apiVersion": "cluster.open-cluster-management.io/v1", "metadata": { "name": "test-mc-1", "uid": "f7fac798-06d5-4f5d-a6a4-d4cf335d8797", "resourceVersion": "370", "generation": 1, "creationTimestamp": "2025-08-18T00:42:39Z", "labels": { "cloud": "Other", "vendor": "Other" }, "annotations": { "cloud": "Other", "global-hub.open-cluster-management.io/managed-by": "hub1", "vendor": "Other" } }, "spec": { "hubAcceptsClient": true, "leaseDurationSeconds": 60 }, "status": { "conditions": null, "version": {}, "clusterClaims": [ { "name": "id.k8s.io", "value": "2f9c3a64-8d57-4a43-9a70-2f8d4ef67259" } ] } } ] } 2025-08-18T00:42:40.662Z DEBUG emitters/object_emitter.go:290 sending {"type": "managedcluster", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:42:40.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedcluster"} init cluster Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: 65ebae9c-520f-4af0-b9d4-a20b342a75e0 time: 2025-08-18T00:42:40.662485525Z datacontenttype: application/json Extensions, extversion: 3.1 Data (binary), { "update": [ { "kind": "ManagedCluster", "apiVersion": "cluster.open-cluster-management.io/v1", "metadata": { "name": "test-mc-1", "uid": "f7fac798-06d5-4f5d-a6a4-d4cf335d8797", "resourceVersion": "370", "generation": 1, "creationTimestamp": "2025-08-18T00:42:39Z", "labels": { "cloud": "Other", "vendor": "Other" }, "annotations": { "cloud": "Other", "global-hub.open-cluster-management.io/managed-by": "hub1", "vendor": "Other" } }, "spec": { "hubAcceptsClient": true, "leaseDurationSeconds": 60 }, "status": { "conditions": null, "version": {}, "clusterClaims": [ { "name": "id.k8s.io", "value": "2f9c3a64-8d57-4a43-9a70-2f8d4ef67259" } ] } } ] } •2025-08-18T00:42:40.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:42.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": 
"managedhub.heartbeat"} 2025-08-18T00:42:44.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:45.662Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: datacontenttype: application/json Extensions, extversion: 4.2 Data (binary), { "delete": [ { "id": "2f9c3a64-8d57-4a43-9a70-2f8d4ef67259", "name": "test-mc-1" } ] } 2025-08-18T00:42:45.662Z DEBUG emitters/object_emitter.go:290 sending {"type": "managedcluster", "create": 0, "update": 0, "delete": 1, "resync": 0, "resync_metadata": 0} 2025-08-18T00:42:45.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedcluster"} empty cluster: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: 439f36d3-27d4-4594-9209-e7e6b6773927 time: 2025-08-18T00:42:45.662295563Z datacontenttype: application/json Extensions, extversion: 4.2 Data (binary), { "delete": [ { "id": "2f9c3a64-8d57-4a43-9a70-2f8d4ef67259", "name": "test-mc-1" } ] } •2025-08-18T00:42:46.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.completecompliance"} 2025-08-18T00:42:46.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.compliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.completecompliance source: hub1 id: 54f22f39-f2bf-4940-b995-b27289ee8a24 time: 2025-08-18T00:42:46.6623045Z datacontenttype: application/json Extensions, extdependencyversion: 1.1 extversion: 0.1 Data, [ { "policyId": "test-globalpolicy-uid", "nonCompliantClusters": [ "hub1-mc2", "hub1-mc3" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] 2025-08-18T00:42:46.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance source: hub1 id: a1cd3457-ff86-4b72-80ac-9cfb07db27d0 time: 2025-08-18T00:42:46.662255306Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "policyId": "test-globalpolicy-uid", "compliantClusters": [ "hub1-mc1" ], "nonCompliantClusters": [ "hub1-mc2", "hub1-mc3" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:42:48.663Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:49.661Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.completecompliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.completecompliance source: hub1 id: 458db002-defb-484d-a62d-80a0e2639e97 time: 2025-08-18T00:42:49.661801362Z datacontenttype: application/json Extensions, extdependencyversion: 1.1 extversion: 1.2 Data, [ { "policyId": "test-globalpolicy-uid", "nonCompliantClusters": [ "hub1-mc3" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:42:50.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 
2025-08-18T00:42:52.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.compliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance source: hub1 id: 734fcc77-a93c-4248-bb7b-db77a86bff36 time: 2025-08-18T00:42:52.662362761Z datacontenttype: application/json Extensions, extversion: 1.2 Data, [ { "policyId": "test-globalpolicy-uid", "compliantClusters": [ "hub1-mc1" ], "nonCompliantClusters": [ "hub1-mc3" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:42:52.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:54.663Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:55.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.compliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance source: hub1 id: 9d38fe31-ad4e-4fec-9566-fde9b69ffb21 time: 2025-08-18T00:42:55.662339944Z datacontenttype: application/json Extensions, extversion: 2.3 Data, [ { "policyId": "test-globalpolicy-uid", "compliantClusters": [], "nonCompliantClusters": [], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:42:55.778Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "event.managedcluster"} >>>>>>>>>>>>>>>>>>> managed cluster event Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.managedcluster source: hub1 id: 9ffbb2e8-0e9c-45af-bc7a-ad170be8d9f1 time: 2025-08-18T00:42:55.778357967Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "eventNamespace": "cluster2", "eventName": "cluster2.event.17cd34e8c8b27fdd", "clusterName": "cluster2", "clusterId": "4f406177-34b2-4852-88dd-ff2809680444", "leafHubName": "hub1", "message": "The managed cluster (cluster2) cannot connect to the hub cluster.", "reason": "AvailableUnknown", "reportingController": "registration-controller", "reportingInstance": "registration-controller-cluster-manager-registration-controller-6794cf54d9-j7lgm", "type": "Warning", "createdAt": "2025-08-18T00:42:55Z" } ] •2025-08-18T00:42:56.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:58.662Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: datacontenttype: application/json Extensions, extversion: 9.1 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "root-policy-test123", "namespace": "default", "uid": "f94bd32e-1ced-4cc1-bd16-2180735e01bf", "resourceVersion": "386", "generation": 1, "creationTimestamp": "2025-08-18T00:42:55Z" }, "spec": { "disabled": true, "policy-templates": [] }, "status": {} } ] } 2025-08-18T00:42:58.662Z DEBUG emitters/object_emitter.go:290 sending {"type": "policy.localspec", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:42:58.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", 
"event.Type": "policy.localspec"} ============================ create policy -> policy spec event: disabled Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: 84cae01d-d07c-4deb-a6f2-77fd175016f8 time: 2025-08-18T00:42:58.662292342Z datacontenttype: application/json Extensions, extversion: 9.1 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "root-policy-test123", "namespace": "default", "uid": "f94bd32e-1ced-4cc1-bd16-2180735e01bf", "resourceVersion": "386", "generation": 1, "creationTimestamp": "2025-08-18T00:42:55Z" }, "spec": { "disabled": true, "policy-templates": [] }, "status": {} } ] } 2025-08-18T00:42:58.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:42:59.662Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: datacontenttype: application/json Extensions, extversion: 7.3 Data (binary), { "update": [ { "kind": "ManagedCluster", "apiVersion": "cluster.open-cluster-management.io/v1", "metadata": { "name": "cluster2", "uid": "6016e61d-b300-425f-ad81-23f20e4b5ec7", "resourceVersion": "384", "generation": 1, "creationTimestamp": "2025-08-18T00:42:55Z", "annotations": { "global-hub.open-cluster-management.io/managed-by": "hub1" } }, "spec": { "hubAcceptsClient": false, "leaseDurationSeconds": 60 }, "status": { "conditions": null, "version": {}, "clusterClaims": [ { "name": "id.k8s.io", "value": "4f406177-34b2-4852-88dd-ff2809680444" } ] } } ] } 2025-08-18T00:42:59.662Z DEBUG emitters/object_emitter.go:290 sending {"type": "managedcluster", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:42:59.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedcluster"} 2025-08-18T00:43:00.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:01.662Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: datacontenttype: application/json Extensions, extversion: 10.2 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "root-policy-test123", "namespace": "default", "uid": "f94bd32e-1ced-4cc1-bd16-2180735e01bf", "resourceVersion": "387", "generation": 2, "creationTimestamp": "2025-08-18T00:42:55Z" }, "spec": { "disabled": false, "policy-templates": [] }, "status": {} } ] } 2025-08-18T00:43:01.662Z DEBUG emitters/object_emitter.go:290 sending {"type": "policy.localspec", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:43:01.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.localspec"} ============================ update policy -> policy spec event: enabled Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: d410513b-8a46-4d2d-b1f8-7ddeea895864 time: 2025-08-18T00:43:01.662744527Z datacontenttype: application/json Extensions, extversion: 10.2 Data (binary), 
{ "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "root-policy-test123", "namespace": "default", "uid": "f94bd32e-1ced-4cc1-bd16-2180735e01bf", "resourceVersion": "387", "generation": 2, "creationTimestamp": "2025-08-18T00:42:55Z" }, "spec": { "disabled": false, "policy-templates": [] }, "status": {} } ] } •2025-08-18T00:43:02.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:04.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.localcompliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance source: hub1 id: 6281b754-2a6a-46cf-938a-51b414029af7 time: 2025-08-18T00:43:04.662251314Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "policyId": "f94bd32e-1ced-4cc1-bd16-2180735e01bf", "compliantClusters": [ "policy-cluster1" ], "nonCompliantClusters": [], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:43:04.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:06.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:07.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.localcompletecompliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompletecompliance source: hub1 id: 86cab0a3-d402-4847-9a52-b0c0befbdf62 time: 2025-08-18T00:43:07.662305002Z datacontenttype: application/json Extensions, extdependencyversion: 1.1 extversion: 0.1 Data, [ { "policyId": "f94bd32e-1ced-4cc1-bd16-2180735e01bf", "nonCompliantClusters": [ "policy-cluster1" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:43:08.662Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: datacontenttype: application/json Extensions, extversion: 9.4 Data (binary), { "update": [ { "kind": "ManagedCluster", "apiVersion": "cluster.open-cluster-management.io/v1", "metadata": { "name": "policy-cluster1", "uid": "5095ea46-31ab-44ec-ac27-70fa48668c46", "resourceVersion": "394", "generation": 1, "creationTimestamp": "2025-08-18T00:43:07Z", "annotations": { "global-hub.open-cluster-management.io/managed-by": "hub1" } }, "spec": { "hubAcceptsClient": false, "leaseDurationSeconds": 60 }, "status": { "conditions": null, "version": {}, "clusterClaims": [ { "name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680336" } ] } } ] } 2025-08-18T00:43:08.662Z DEBUG emitters/object_emitter.go:290 sending {"type": "managedcluster", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:43:08.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedcluster"} 2025-08-18T00:43:08.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:10.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "event.localreplicatedpolicy"} Context 
Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.localreplicatedpolicy source: hub1 id: 831c073a-603d-42ca-b6f9-44be2f8f2b0c time: 2025-08-18T00:43:10.662404837Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "eventName": "default.root-policy-test123.17b0db2427432200", "eventNamespace": "policy-cluster1", "message": "NonCompliant; violation - limitranges [container-mem-limit-range] not found in namespace\n\t\t\t\t\t\t\tdefault", "reason": "PolicyStatusSync", "count": 1, "source": { "component": "policy-status-history-sync" }, "createdAt": "2025-08-18T00:43:07Z", "policyId": "f94bd32e-1ced-4cc1-bd16-2180735e01bf", "clusterId": "3f406177-34b2-4852-88dd-ff2809680336", "clusterName": "policy-cluster1", "compliance": "NonCompliant" } ] •2025-08-18T00:43:10.663Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:10.666Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy" 2025-08-18T00:43:10.666Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy" 2025-08-18T00:43:10.666Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups" 2025-08-18T00:43:10.667Z DEBUG generic/generic_handler.go:42 update bundle by object: &{{Placement cluster.open-cluster-management.io/v1beta1} {test-globalplacement-1 default 4086bfcf-7c4d-40e6-b4c9-e494b5cfe865 397 1 2025-08-18 00:43:10 +0000 UTC map[] map[global-hub.open-cluster-management.io/origin-ownerreference-uid:test-globalplacement-uid] [] [] []} {[] [] {Additive []} {[]} [] {{[] {0 0 }}}} {0 [] []}} 2025-08-18T00:43:10.669Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy" 2025-08-18T00:43:10.669Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy" 2025-08-18T00:43:10.669Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups" 2025-08-18T00:43:10.670Z DEBUG generic/generic_handler.go:42 update bundle by object: &{{Placement cluster.open-cluster-management.io/v1beta1} {test-globalplacement-1 default 4086bfcf-7c4d-40e6-b4c9-e494b5cfe865 398 1 2025-08-18 00:43:10 +0000 UTC map[] map[global-hub.open-cluster-management.io/origin-ownerreference-uid:test-globalplacement-uid] [] [] []} {[] [] {Additive []} {[]} [] {{[] {0 0 }}}} {0 [] []}} 2025-08-18T00:43:12.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:13.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "placement.spec"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.placement.spec source: hub1 id: 26eab6a0-48b9-411a-a04f-9b4ed20897f8 time: 2025-08-18T00:43:13.664082955Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "kind": "Placement", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "metadata": { "name": "test-globalplacement-1", "namespace": "default", "uid": "4086bfcf-7c4d-40e6-b4c9-e494b5cfe865", "resourceVersion": "398", "generation": 1, "creationTimestamp": "2025-08-18T00:43:10Z", "annotations": { "global-hub.open-cluster-management.io/origin-ownerreference-uid": "test-globalplacement-uid" }, "finalizers": [ "global-hub.open-cluster-management.io/resource-cleanup" ], "managedFields": [ { "manager": 
"status.test", "operation": "Update", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "time": "2025-08-18T00:43:10Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:global-hub.open-cluster-management.io/origin-ownerreference-uid": {} }, "f:finalizers": { ".": {}, "v:\"global-hub.open-cluster-management.io/resource-cleanup\"": {} } }, "f:spec": { ".": {}, "f:prioritizerPolicy": { ".": {}, "f:mode": {} } } } } ] }, "spec": { "prioritizerPolicy": { "mode": "Additive" }, "spreadPolicy": {}, "decisionStrategy": { "groupStrategy": { "clustersPerDecisionGroup": 0 } } }, "status": { "numberOfSelectedClusters": 0, "decisionGroups": null, "conditions": null } } ] •2025-08-18T00:43:13.666Z DEBUG generic/generic_handler.go:42 update bundle by object: &{{PlacementDecision cluster.open-cluster-management.io/v1beta1} {test-placementdecision-1 default 7e9502eb-7776-4908-b0bf-0c9445501164 399 1 2025-08-18 00:43:13 +0000 UTC map[] map[global-hub.open-cluster-management.io/origin-ownerreference-uid:test-globalplacement-decision-uid] [] [] []} {[]}} 2025-08-18T00:43:13.668Z DEBUG generic/generic_handler.go:42 update bundle by object: &{{PlacementDecision cluster.open-cluster-management.io/v1beta1} {test-placementdecision-1 default 7e9502eb-7776-4908-b0bf-0c9445501164 400 1 2025-08-18 00:43:13 +0000 UTC map[] map[global-hub.open-cluster-management.io/origin-ownerreference-uid:test-globalplacement-decision-uid] [] [] []} {[]}} 2025-08-18T00:43:14.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:16.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:16.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "placementdecision"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.placementdecision source: hub1 id: 26619c64-01b9-46e9-b9d9-cea3a959ed0e time: 2025-08-18T00:43:16.6640295Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "kind": "PlacementDecision", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "metadata": { "name": "test-placementdecision-1", "namespace": "default", "uid": "7e9502eb-7776-4908-b0bf-0c9445501164", "resourceVersion": "400", "generation": 1, "creationTimestamp": "2025-08-18T00:43:13Z", "annotations": { "global-hub.open-cluster-management.io/origin-ownerreference-uid": "test-globalplacement-decision-uid" }, "finalizers": [ "global-hub.open-cluster-management.io/resource-cleanup" ], "managedFields": [ { "manager": "status.test", "operation": "Update", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "time": "2025-08-18T00:43:13Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:global-hub.open-cluster-management.io/origin-ownerreference-uid": {} }, "f:finalizers": { ".": {}, "v:\"global-hub.open-cluster-management.io/resource-cleanup\"": {} } } } } ] }, "status": { "decisions": null } } ] •2025-08-18T00:43:16.668Z DEBUG generic/generic_handler.go:42 update bundle by object: &{{SubscriptionReport apps.open-cluster-management.io/v1alpha1} {test-subscriptionreport-1 default 79b8365a-2faa-499d-8141-88feba9f1996 403 1 2025-08-18 00:43:16 +0000 UTC map[] map[] [] [] []} Application {1 0 0 0 1} [0xc001fc4120] 
[&ObjectReference{Kind:Deployment,Namespace:default,Name:nginx-sample,UID:,APIVersion:apps/v1,ResourceVersion:,FieldPath:,}]} 2025-08-18T00:43:16.777Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "subscription.report"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.subscription.report source: hub1 id: 7d36bb06-91b9-4214-8254-9c6764a185f6 time: 2025-08-18T00:43:16.777157089Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "kind": "SubscriptionReport", "apiVersion": "apps.open-cluster-management.io/v1alpha1", "metadata": { "name": "test-subscriptionreport-1", "namespace": "default", "uid": "79b8365a-2faa-499d-8141-88feba9f1996", "resourceVersion": "403", "generation": 1, "creationTimestamp": "2025-08-18T00:43:16Z" }, "reportType": "Application", "summary": { "deployed": "1", "inProgress": "0", "failed": "0", "propagationFailed": "0", "clusters": "1" }, "results": [ { "source": "hub1-mc1", "timestamp": { "seconds": 0, "nanos": 0 }, "result": "deployed" } ], "resources": [ { "kind": "Deployment", "namespace": "default", "name": "nginx-sample", "apiVersion": "apps/v1" } ] } ] •2025-08-18T00:43:18.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:19.662Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: datacontenttype: application/json Extensions, extversion: 15.3 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "event-local-policy", "namespace": "default", "uid": "a33184a2-855a-480c-be7c-e45ca4895bcb", "resourceVersion": "404", "generation": 1, "creationTimestamp": "2025-08-18T00:43:16Z" }, "spec": { "disabled": true, "policy-templates": [] }, "status": {} } ] } 2025-08-18T00:43:19.662Z DEBUG emitters/object_emitter.go:290 sending {"type": "policy.localspec", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:43:19.662Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.localspec"} 2025-08-18T00:43:19.778Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "event.localrootpolicy"} >>>>>>>>>>>>>>>>>>> root policy event1 Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.localrootpolicy source: hub1 id: 818cc826-811c-4636-a3e0-e8c61fed1e0f time: 2025-08-18T00:43:19.778362305Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "eventName": "event-local-policy.123r543243242", "eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster1", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:43:16Z", "policyId": "a33184a2-855a-480c-be7c-e45ca4895bcb", "compliance": "Unknown" } ] •2025-08-18T00:43:20.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:43:22.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} >>>>>>> not get the new event: policy1.newer.123r543245555 [ { "eventName": "event-local-policy.123r543243242", 
"eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster1", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:43:16Z", "policyId": "a33184a2-855a-480c-be7c-e45ca4895bcb", "compliance": "Unknown" } ] >>>>>>> not get the new event: policy1.newer.123r543245555 [ { "eventName": "event-local-policy.123r543243242", "eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster1", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:43:16Z", "policyId": "a33184a2-855a-480c-be7c-e45ca4895bcb", "compliance": "Unknown" } ] 2025-08-18T00:43:24.664Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} >>>>>>> not get the new event: policy1.newer.123r543245555 [ { "eventName": "event-local-policy.123r543243242", "eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster1", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:43:16Z", "policyId": "a33184a2-855a-480c-be7c-e45ca4895bcb", "compliance": "Unknown" } ] 2025-08-18T00:43:25.778Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "event.localrootpolicy"} >>>>>>>>>>>>>>>>>>> root policy event2 Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.localrootpolicy source: hub1 id: e3b0715d-16e5-4e94-bf82-7a20a755241c time: 2025-08-18T00:43:25.777945253Z datacontenttype: application/json Extensions, extversion: 1.2 Data, [ { "eventName": "policy1.newer.123r543245555", "eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster3", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:43:22Z", "policyId": "a33184a2-855a-480c-be7c-e45ca4895bcb", "compliance": "Unknown" } ] •Scontext canceled, exiting... 2025-08-18T00:43:25.801Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:43:25.801Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:43:25.801Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:43:25.801Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:43:25.801Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:43:25.801Z INFO generic/periodic_syncer.go:155 Stopping periodic syncer... 
2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "route", "controllerGroup": "route.openshift.io", "controllerKind": "Route"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "event", "controllerGroup": "", "controllerKind": "Event"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:43:25.801Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "event", "controllerGroup": "", "controllerKind": "Event"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish 
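The teardown sequence above (receivers stopped, non leader election runnables, leader election runnables, caches, webhooks, HTTP servers) is controller-runtime's ordinary shutdown once the manager's context is cancelled. A minimal sketch of the manager lifecycle that produces it; options are left at their defaults here, unlike the real agent:

    package main

    import (
        "log"

        ctrl "sigs.k8s.io/controller-runtime"
    )

    func main() {
        // NewManager wires caches, webhooks and HTTP servers; Start blocks until the
        // context is cancelled and then stops them in the order seen in the log above.
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            log.Fatalf("unable to create manager: %v", err)
        }
        // SetupSignalHandler returns a context cancelled on SIGTERM/SIGINT; in the
        // test the same effect comes from cancelling the suite's context.
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            log.Fatalf("manager exited with error: %v", err)
        }
    }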
{"controller": "subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "route", "controllerGroup": "route.openshift.io", "controllerKind": "Route"} 2025-08-18T00:43:25.801Z INFO controller/controller.go:239 All workers finished {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion"} 2025-08-18T00:43:25.801Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 00:43:25.802205 24379 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.SubscriptionReport" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:43:25.802363 24379 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Policy" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" 2025-08-18T00:43:25.802Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:43:25.802Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:43:25.802Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager Ran 25 of 26 Specs in 76.223 seconds SUCCESS! -- 25 Passed | 0 Failed | 0 Pending | 1 Skipped --- PASS: TestControllers (76.22s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/agent/status 76.315s failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scopefailed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scopefailed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope=== RUN TestNonK8sAPI Running Suite: NonK8s API Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/api ==================================================================================================================== Random Seed: 1755477730 Will run 6 of 6 specs The files belonging to this database system will be owned by user "1002500000". This user must also own the server process. The database cluster will be initialized with locale "C". 
The default database encoding has accordingly been set to "SQL_ASCII". The default text search configuration will be set to "english". Data page checksums are disabled. creating directory /tmp/tmp/embedded-postgres-go-50318/extracted/data ... ok creating subdirectories ... ok selecting dynamic shared memory implementation ... posix selecting default max_connections ... 100 selecting default shared_buffers ... 128MB selecting default time zone ... UTC creating configuration files ... ok running bootstrap script ... ok performing post-bootstrap initialization ... ok syncing data to disk ... ok Success. You can now start the database server using: /tmp/tmp/embedded-postgres-go-50318/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-50318/extracted/data -l logfile start waiting for server to start....2025-08-18 00:42:14.821 UTC [24926] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit 2025-08-18 00:42:14.821 UTC [24926] LOG: listening on IPv6 address "::1", port 50318 2025-08-18 00:42:14.821 UTC [24926] LOG: listening on IPv4 address "127.0.0.1", port 50318 2025-08-18 00:42:14.822 UTC [24926] LOG: listening on Unix socket "/tmp/.s.PGSQL.50318" 2025-08-18 00:42:14.824 UTC [24929] LOG: database system was shut down at 2025-08-18 00:42:14 UTC 2025-08-18 00:42:14.827 UTC [24926] LOG: database system is ready to accept connections done server started script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. script 1.upgrade.sql executed successfully. script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. 
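The NonK8s API suite boots a throwaway PostgreSQL with github.com/fergusstrange/embedded-postgres (the initdb output and pg_ctl hint above), then applies the schema scripts before exercising the REST API. A minimal sketch of that setup; the port and runtime path are illustrative, not the ones the suite picks:

    package main

    import (
        "log"

        embeddedpostgres "github.com/fergusstrange/embedded-postgres"
    )

    func main() {
        // Start an embedded PostgreSQL; the library defaults give user, password
        // and database name "postgres".
        db := embeddedpostgres.NewDatabase(embeddedpostgres.DefaultConfig().
            Port(50318).
            RuntimePath("/tmp/embedded-postgres-demo"))
        if err := db.Start(); err != nil {
            log.Fatalf("failed to start embedded postgres: %v", err)
        }
        defer db.Stop()

        // The suite would now run its SQL scripts (1.schemas.sql, 2.tables.sql, ...)
        // against postgres://postgres:postgres@localhost:50318/postgres.
        log.Println("embedded postgres is up")
    }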
- using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) failed to get CustomResourceDefinition for managedclusters.cluster.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "managedclusters.cluster.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope[GIN-debug] GET /global-hub-api/v1/managedclusters --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis/managedclusters.ListManagedClusters.func1 (4 handlers) [GIN-debug] PATCH /global-hub-api/v1/managedcluster/:clusterID --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.PatchManagedCluster.func2 (4 handlers) [GIN-debug] GET /global-hub-api/v1/policies --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.ListPolicies.func3 (4 handlers) [GIN-debug] GET /global-hub-api/v1/policy/:policyID/status --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.GetPolicyStatus.func4 (4 handlers) [GIN-debug] GET /global-hub-api/v1/subscriptions --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.ListSubscriptions.func5 (4 handlers) [GIN-debug] GET /global-hub-api/v1/subscriptionreport/:subscriptionID --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.GetSubscriptionReport.func6 (4 handlers) got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) [GIN] 2025/08/18 - 00:42:15 | 200 | 3.055244ms | | GET "/global-hub-api/v1/managedclusters" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) [GIN] 2025/08/18 - 00:42:15 | 200 | 1.366585ms | | GET "/global-hub-api/v1/managedclusters" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) [GIN] 2025/08/18 - 00:42:15 | 200 | 894.423µs | | GET "/global-hub-api/v1/managedclusters?continue=eyJsYXN0TmFtZSI6IiIsImxhc3RVSUQiOiIwMDAwMDAwMC0wMDAwLTAwMDAtMDAwMC0wMDAwMDAwMDAwMDAifQ" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: AND payload -> 'metadata' -> 'labels' @> '{"cloud": "Other"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"vendor": "Openshift"}') 
AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') AND payload -> 'metadata' -> 'labels' ? 'vendor' limit: 2 last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') AND payload -> 'metadata' -> 'labels' @> '{"cloud": "Other"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"vendor": "Openshift"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') AND payload -> 'metadata' -> 'labels' ? 'vendor' ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) LIMIT 2 [GIN] 2025/08/18 - 00:42:15 | 200 | 1.389147ms | | GET "/global-hub-api/v1/managedclusters?limit=2&labelSelector=cloud%3DOther%2Cvendor%21%3DOpenshift%2C%21testnokey%2Cvendor" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) Returning as table... [GIN] 2025/08/18 - 00:42:15 | 200 | 1.28093ms | | GET "/global-hub-api/v1/managedclusters" MCL Table {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Age","type":"date","format":"","description":"Custom resource definition column (in JSONPath format): .metadata.creationTimestamp","priority":0}],"rows":[{"cells":["mc1",null],"object":{"apiVersion":"cluster.open-cluster-management.io/v1","kind":"ManagedCluster","metadata":{"annotations":{"global-hub.open-cluster-management.io/managed-by":"hub1","open-cluster-management/created-via":"other"},"creationTimestamp":null,"labels":{"cloud":"Other","vendor":"Other"},"name":"mc1","uid":"2aa5547c-c172-47ed-b70b-db468c84d327"},"spec":{"hubAcceptsClient":true,"leaseDurationSeconds":60},"status":{"conditions":null,"version":{}}}},{"cells":["mc2",null],"object":{"apiVersion":"cluster.open-cluster-management.io/v1","kind":"ManagedCluster","metadata":{"annotations":{"global-hub.open-cluster-management.io/managed-by":"hub1","open-cluster-management/created-via":"other"},"creationTimestamp":null,"labels":{"cloud":"Other","vendor":"Other"},"name":"mc2","uid":"18c9e13c-4488-4dcd-a5ac-1196093abbc0"},"spec":{"hubAcceptsClient":true,"leaseDurationSeconds":60},"status":{"conditions":null,"version":{}}}}]} got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) •[GIN] 2025/08/18 - 00:42:23 | 200 | 8.006534685s | | GET "/global-hub-api/v1/managedclusters?watch" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] patch for cluster with ID: 2aa5547c-c172-47ed-b70b-db468c84d327 patch for managed cluster: mc1 -leaf hub: hub1 labels to add: map[foo:bar] labels to remove: map[] [GIN] 2025/08/18 - 00:42:23 | 200 | 2.680642ms | | PATCH "/global-hub-api/v1/managedcluster/2aa5547c-c172-47ed-b70b-db468c84d327" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] patch for cluster with ID: 2aa5547c-c172-47ed-b70b-db468c84d327 patch for managed cluster: mc1 -leaf hub: hub1 labels to add: map[foo:test] labels to remove: map[] [GIN] 2025/08/18 - 00:42:23 | 200 | 1.768447ms | | PATCH "/global-hub-api/v1/managedcluster/2aa5547c-c172-47ed-b70b-db468c84d327" •got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? 
ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' [GIN] 2025/08/18 - 00:42:23 | 200 | 44.900767ms | | GET "/global-hub-api/v1/policies" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' [GIN] 2025/08/18 - 00:42:23 | 200 | 115.781143ms | | GET "/global-hub-api/v1/policies" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: AND payload -> 'metadata' -> 'labels' @> '{"foo": "bar"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"env": "dev"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') AND payload -> 'metadata' -> 'labels' ? 
'foo' limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') AND payload -> 'metadata' -> 'labels' @> '{"foo": "bar"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"env": "dev"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') AND payload -> 'metadata' -> 'labels' ? 'foo' ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' [GIN] 2025/08/18 - 00:42:23 | 200 | 3.527123ms | | GET "/global-hub-api/v1/policies?labelSelector=foo%3Dbar%2Cenv%21%3Ddev%2C%21testnokey%2Cfoo" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? 
ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' Returning as table... [GIN] 2025/08/18 - 00:42:23 | 200 | 2.830944ms | | GET "/global-hub-api/v1/policies" Policy Table {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Age","type":"date","format":"","description":"Custom resource definition column (in JSONPath format): .metadata.creationTimestamp","priority":0}],"rows":[{"cells":["policy-config-audit",null],"object":{"apiVersion":"policy.open-cluster-management.io/v1","kind":"Policy","metadata":{"annotations":{"policy.open-cluster-management.io/categories":"AU Audit and Accountability","policy.open-cluster-management.io/controls":"AU-3 Content of Audit Records","policy.open-cluster-management.io/standards":"NIST SP 800-53"},"creationTimestamp":null,"labels":{"env":"production","foo":"bar"},"name":"policy-config-audit","namespace":"default"},"spec":{"disabled":false,"policy-templates":[{"objectDefinition":{"apiVersion":"policy.open-cluster-management.io/v1","kind":"ConfigurationPolicy","metadata":{"name":"policy-config-audit"},"spec":{"object-templates":[{"complianceType":"musthave","objectDefinition":{"apiVersion":"config.openshift.io/v1","kind":"APIServer","metadata":{"name":"cluster"},"spec":{"audit":{"customRules":[{"group":"system:authenticated:oauth","profile":"WriteRequestBodies"},{"group":"system:authenticated","profile":"AllRequestBodies"}]},"profile":"Default"}}}],"remediationAction":"inform","severity":"low"}}}],"remediationAction":"inform"},"status":{"compliant":"NonCompliant","placement":[{"placementBinding":"binding-config-audit","placementRule":"placement-config-audit"}],"status":[{"clustername":"mc1","clusternamespace":"mc1","compliant":"NonCompliant"},{"clustername":"mc2","clusternamespace":"mc2","compliant":"Compliant"}],"summary":{"complianceClusterNumber":1,"nonComplianceClusterNumber":1}}}}]} got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY (payload -> 'metadata' ->> 'name', 
payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' •[GIN] 2025/08/18 - 00:42:31 | 200 | 8.005305446s | | GET "/global-hub-api/v1/policies?watch" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] getting status for policy: d93b8769-b496-48da-9ba9-28f2b4073c4c policy query with policy ID: SELECT payload FROM spec.policies WHERE deleted = FALSE AND id = ? policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' [GIN] 2025/08/18 - 00:42:31 | 200 | 8.06843ms | | GET "/global-hub-api/v1/policy/d93b8769-b496-48da-9ba9-28f2b4073c4c/status" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] getting status for policy: d93b8769-b496-48da-9ba9-28f2b4073c4c policy query with policy ID: SELECT payload FROM spec.policies WHERE deleted = FALSE AND id = ? policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? 
ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' returning policy as table... [GIN] 2025/08/18 - 00:42:31 | 200 | 2.972583ms | | GET "/global-hub-api/v1/policy/d93b8769-b496-48da-9ba9-28f2b4073c4c/status" Single Policy Table {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Age","type":"date","format":"","description":"Custom resource definition column (in JSONPath format): .metadata.creationTimestamp","priority":0}],"rows":[{"cells":["policy-config-audit",null],"object":{"apiVersion":"policy.open-cluster-management.io/v1","kind":"Policy","metadata":{"annotations":{"policy.open-cluster-management.io/categories":"AU Audit and Accountability","policy.open-cluster-management.io/controls":"AU-3 Content of Audit Records","policy.open-cluster-management.io/standards":"NIST SP 800-53"},"creationTimestamp":null,"labels":{"env":"production","foo":"bar"},"name":"policy-config-audit","namespace":"default"},"status":{"compliant":"NonCompliant","placement":[{"placementBinding":"binding-config-audit","placementRule":"placement-config-audit"}],"status":[{"clustername":"mc1","clusternamespace":"mc1","compliant":"NonCompliant"},{"clustername":"mc2","clusternamespace":"mc2","compliant":"Compliant"}],"summary":{"complianceClusterNumber":1,"nonComplianceClusterNumber":1}}}}]} got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] getting status for policy: d93b8769-b496-48da-9ba9-28f2b4073c4c policy query with policy ID: SELECT payload FROM spec.policies WHERE deleted = FALSE AND id = ? policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? 
ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' •got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') [GIN] 2025/08/18 - 00:42:39 | 200 | 8.005705633s | | GET "/global-hub-api/v1/policy/d93b8769-b496-48da-9ba9-28f2b4073c4c/status?watch" [GIN] 2025/08/18 - 00:42:39 | 200 | 8.62607ms | | GET "/global-hub-api/v1/subscriptions" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') [GIN] 2025/08/18 - 00:42:39 | 200 | 1.504788ms | | GET "/global-hub-api/v1/subscriptions" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: AND payload -> 'metadata' -> 'labels' @> '{"app": "foo"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"env": "dev"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') limit: last returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') AND payload -> 'metadata' -> 'labels' @> '{"app": "foo"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"env": "dev"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') [GIN] 2025/08/18 - 00:42:39 | 200 | 1.160704ms | | GET "/global-hub-api/v1/subscriptions?labelSelector=app%3Dfoo%2Cenv%21%3Ddev%2C%21testnokey" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') Returning as table... 
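The labelSelector handling traced above (for example foo=bar,env!=dev,!testnokey,foo) maps each selector requirement onto a JSONB predicate over payload -> 'metadata' -> 'labels': equality becomes a containment check with @>, inequality a negated containment, and bare-key or !key requirements use the ? existence operator. The Go sketch below shows that mapping with a simplified requirement type; it is an illustration only, not the repository's actual selector code, and it inlines literals where real code should escape or parameterize them.

// labelselector_sql.go - illustrative mapping from label-selector requirements to the
// JSONB predicates seen after "parsed selector:" in the log above. All names here are
// assumptions made for the sketch.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// requirement is a simplified stand-in for one clause of a Kubernetes label selector.
type requirement struct {
	Key      string
	Operator string // "=", "!=", "exists", "notexists"
	Value    string // unused for "exists"/"notexists"
}

// toJSONBPredicates renders the requirements as SQL predicates over the labels object.
func toJSONBPredicates(reqs []requirement) (string, error) {
	const labelsCol = "payload -> 'metadata' -> 'labels'"
	var clauses []string
	for _, r := range reqs {
		switch r.Operator {
		case "=", "!=":
			doc, err := json.Marshal(map[string]string{r.Key: r.Value})
			if err != nil {
				return "", err
			}
			clause := fmt.Sprintf("%s @> '%s'", labelsCol, doc)
			if r.Operator == "!=" {
				clause = fmt.Sprintf("NOT (%s)", clause)
			}
			clauses = append(clauses, clause)
		case "exists": // bare key, e.g. "foo"
			clauses = append(clauses, fmt.Sprintf("%s ? '%s'", labelsCol, r.Key))
		case "notexists": // negated key, e.g. "!testnokey"
			clauses = append(clauses, fmt.Sprintf("NOT (%s ? '%s')", labelsCol, r.Key))
		default:
			return "", fmt.Errorf("unsupported operator %q", r.Operator)
		}
	}
	return strings.Join(clauses, " AND "), nil
}

func main() {
	// Reproduces the shape of the predicates logged for foo=bar,env!=dev,!testnokey,foo.
	where, err := toJSONBPredicates([]requirement{
		{Key: "foo", Operator: "=", Value: "bar"},
		{Key: "env", Operator: "!=", Value: "dev"},
		{Key: "testnokey", Operator: "notexists"},
		{Key: "foo", Operator: "exists"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("AND " + where)
}

The @> check passes when the labels object contains the given key/value pair, which is why equality and inequality can share one code path.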
[GIN] 2025/08/18 - 00:42:39 | 200 | 1.190366ms | | GET "/global-hub-api/v1/subscriptions" Subs Table {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Age","type":"date","format":"","description":"Custom resource definition column (in JSONPath format): .metadata.creationTimestamp","priority":0}],"rows":[{"cells":["bar-appsub",null],"object":{"apiVersion":"apps.open-cluster-management.io/v1","kind":"Subscription","metadata":{"annotations":{"apps.open-cluster-management.io/git-branch":"main","apps.open-cluster-management.io/git-path":"bar","apps.open-cluster-management.io/reconcile-option":"merge"},"creationTimestamp":null,"labels":{"app":"bar","app.kubernetes.io/part-of":"bar","apps.open-cluster-management.io/reconcile-rate":"medium"},"name":"bar-appsub","namespace":"bar"},"spec":{"channel":"git-application-samples-ns/git-application-samples","placement":{"placementRef":{"kind":"PlacementRule","name":"bar-placement"}}},"status":{"ansiblejobs":{},"lastUpdateTime":null}}},{"cells":["foo-appsub",null],"object":{"apiVersion":"apps.open-cluster-management.io/v1","kind":"Subscription","metadata":{"annotations":{"apps.open-cluster-management.io/git-branch":"main","apps.open-cluster-management.io/git-path":"foo","apps.open-cluster-management.io/reconcile-option":"merge"},"creationTimestamp":null,"labels":{"app":"foo","app.kubernetes.io/part-of":"foo","apps.open-cluster-management.io/reconcile-rate":"medium"},"name":"foo-appsub","namespace":"foo"},"spec":{"channel":"git-application-samples-ns/git-application-samples","placement":{"placementRef":{"kind":"PlacementRule","name":"foo-placement"}}},"status":{"ansiblejobs":{},"lastUpdateTime":null}}}]} got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') •[GIN] 2025/08/18 - 00:42:47 | 200 | 8.003700435s | | GET "/global-hub-api/v1/subscriptions?watch" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] getting subscription report for subscription: 33a3e3cb-a542-49dc-b2d4-f9c24633774b subscription query with subscription ID: SELECT payload->'metadata'->>'name', payload->'metadata'->>'namespace' FROM spec.subscriptions WHERE deleted = FALSE AND id = ? subscription report query with subscription name and namespace: SELECT payload FROM status.subscription_reports WHERE payload->'metadata'->>'name'= ? AND payload->'metadata'->>'namespace' = ? 
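Every list query in this trace pages with a keyset comparison on the (metadata.name, metadata.uid) tuple instead of OFFSET: the handler remembers the last name/uid it returned and asks only for rows that sort after it, so later pages stay as cheap as the first and remain stable while rows are inserted. Below is a rough database/sql sketch of the same pattern against spec.subscriptions; the function name, LIMIT handling, and connection string are assumptions, not the project's real API code.

// keyset_page.go - sketch of the tuple-based keyset pagination used by the logged
// "subscription list query" (the ('', '') literals are simply the first page).
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver; already among the test dependencies above
)

// listSubscriptionsAfter returns up to limit subscription payloads whose
// (metadata.name, metadata.uid) tuple sorts after the last row of the previous page.
func listSubscriptionsAfter(db *sql.DB, lastName, lastUID string, limit int) ([]string, error) {
	rows, err := db.Query(`
		SELECT payload
		FROM spec.subscriptions
		WHERE deleted = FALSE
		  AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ($1, $2)
		ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid')
		LIMIT $3`, lastName, lastUID, limit)
	if err != nil {
		return nil, fmt.Errorf("list subscriptions: %w", err)
	}
	defer rows.Close()

	var payloads []string
	for rows.Next() {
		var payload string
		if err := rows.Scan(&payload); err != nil {
			return nil, err
		}
		payloads = append(payloads, payload)
	}
	return payloads, rows.Err()
}

func main() {
	// Assumed DSN for a local database; the CI run above uses an embedded Postgres instead.
	db, err := sql.Open("postgres", "postgres://postgres:postgres@localhost:5432/hoh?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	firstPage, err := listSubscriptionsAfter(db, "", "", 100) // "" / "" starts from the beginning
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("first page holds %d subscriptions\n", len(firstPage))
}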
[GIN] 2025/08/18 - 00:42:47 | 200 | 1.338816ms | | GET "/global-hub-api/v1/subscriptionreport/33a3e3cb-a542-49dc-b2d4-f9c24633774b" •waiting for server to shut down...2025-08-18 00:42:47.511 UTC [24926] LOG: received fast shutdown request .2025-08-18 00:42:47.511 UTC [24926] LOG: aborting any active transactions 2025-08-18 00:42:47.511 UTC [24938] FATAL: terminating connection due to administrator command 2025-08-18 00:42:47.513 UTC [24926] LOG: background worker "logical replication launcher" (PID 24932) exited with exit code 1 2025-08-18 00:42:47.514 UTC [24927] LOG: shutting down 2025-08-18 00:42:47.514 UTC [24927] LOG: checkpoint starting: shutdown immediate 2025-08-18 00:42:47.529 UTC [24927] LOG: checkpoint complete: wrote 1041 buffers (6.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.013 s, sync=0.002 s, total=0.015 s; sync files=481, longest=0.001 s, average=0.001 s; distance=5321 kB, estimate=5321 kB; lsn=0/1A10E78, redo lsn=0/1A10E78 2025-08-18 00:42:47.536 UTC [24926] LOG: database system is shut down done server stopped Ran 6 of 6 Specs in 36.939 seconds SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestNonK8sAPI (36.94s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/manager/api 37.032s failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope failed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope failed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope === RUN TestController Running Suite: Manager Controller Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/controller =================================================================================================================================== Random Seed: 1755477730 Will run 12 of 12 specs The files belonging to this database system will be owned by user "1002500000". This user must also own the server process. The database cluster will be initialized with locale "C". The default database encoding has accordingly been set to "SQL_ASCII". The default text search configuration will be set to "english". Data page checksums are disabled. creating directory /tmp/tmp/embedded-postgres-go-17838/extracted/data ... ok creating subdirectories ... ok selecting dynamic shared memory implementation ... posix selecting default max_connections ... 100 selecting default shared_buffers ... 128MB selecting default time zone ... UTC creating configuration files ... ok running bootstrap script ... ok performing post-bootstrap initialization ... ok syncing data to disk ... ok Success. 
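Before each suite, the job boots a disposable PostgreSQL through github.com/fergusstrange/embedded-postgres (fetched earlier in this run) and replays the numbered schema scripts, which is what produces the initdb output and the "script 1.schemas.sql executed successfully." lines. The sketch below shows how such a bootstrap can be wired; the port, database name, and script paths are assumptions, not the repository's actual test harness.

// embedded_pg_sketch.go - illustrative bootstrap of an embedded Postgres for an
// integration suite, applying SQL scripts in order. Paths and names are assumptions.
package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	embeddedpostgres "github.com/fergusstrange/embedded-postgres"
	_ "github.com/lib/pq"
)

// startTestPostgres boots a throwaway server on the given port and replays the
// provided SQL scripts, returning the running instance and an open connection.
func startTestPostgres(port uint32, scripts []string) (*embeddedpostgres.EmbeddedPostgres, *sql.DB, error) {
	pg := embeddedpostgres.NewDatabase(
		embeddedpostgres.DefaultConfig().Port(port).Database("hoh"), // "hoh" is a placeholder name
	)
	if err := pg.Start(); err != nil {
		return nil, nil, fmt.Errorf("start embedded postgres: %w", err)
	}

	dsn := fmt.Sprintf("host=localhost port=%d user=postgres password=postgres dbname=hoh sslmode=disable", port)
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		_ = pg.Stop()
		return nil, nil, err
	}

	for _, script := range scripts {
		content, err := os.ReadFile(script)
		if err == nil {
			_, err = db.Exec(string(content))
		}
		if err != nil {
			_ = pg.Stop()
			return nil, nil, fmt.Errorf("apply %s: %w", script, err)
		}
		fmt.Printf("script %s executed successfully.\n", script)
	}
	return pg, db, nil
}

func main() {
	pg, db, err := startTestPostgres(17838, nil) // same port as the suite above, no scripts
	if err != nil {
		log.Fatal(err)
	}
	defer func() { _ = pg.Stop() }()
	defer db.Close()
	fmt.Println("embedded postgres is up")
}

Tearing the instance down at the end of a suite corresponds to the "waiting for server to shut down ... done server stopped" block above.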
You can now start the database server using: /tmp/tmp/embedded-postgres-go-17838/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-17838/extracted/data -l logfile start waiting for server to start....2025-08-18 00:42:25.255 UTC [25208] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit 2025-08-18 00:42:25.255 UTC [25208] LOG: listening on IPv6 address "::1", port 17838 2025-08-18 00:42:25.255 UTC [25208] LOG: listening on IPv4 address "127.0.0.1", port 17838 2025-08-18 00:42:25.256 UTC [25208] LOG: listening on Unix socket "/tmp/.s.PGSQL.17838" 2025-08-18 00:42:25.258 UTC [25212] LOG: database system was shut down at 2025-08-18 00:42:25 UTC 2025-08-18 00:42:25.261 UTC [25208] LOG: database system is ready to accept connections done server started script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. script 1.upgrade.sql executed successfully. script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. Time max 2025-09-01 00:00:00 +0000 UTC min 2024-02-01 00:00:00 +0000 UTC expiredTime 2024-01-01 00:00:00 +0000 UTC the expired partition table is created: event.local_policies_2024_01 the expired partition table is created: event.local_root_policies_2024_01 the expired partition table is created: history.local_compliance_2024_01 the expired partition table is created: event.managed_clusters_2024_01 the min partition table is created: event.local_policies_2024_02 the min partition table is created: event.local_root_policies_2024_02 the min partition table is created: history.local_compliance_2024_02 the min partition table is created: event.managed_clusters_2024_02 the deleted record is created: status.managed_clusters the deleted record is created: status.leaf_hubs the deleted record is created: local_spec.policies deleting the expired partition table: event.local_policies_2024_01 deleting the expired partition table: event.local_root_policies_2024_01 deleting the expired partition table: history.local_compliance_2024_01 deleting the expired partition table: event.managed_clusters_2024_01 2025-08-18T00:42:25.705Z INFO data-retention task/data_retention.go:115 create partition table {"table": "event.local_policies_2025_09", "start": "2025-09-01", "end": "2025-10-01"} 2025-08-18T00:42:25.717Z INFO data-retention task/data_retention.go:124 delete partition table {"table": "event.local_policies_2024_01"} 2025-08-18T00:42:25.723Z INFO data-retention task/data_retention.go:115 create partition table {"table": "event.local_root_policies_2025_09", "start": "2025-09-01", "end": "2025-10-01"} 2025-08-18T00:42:25.744Z INFO data-retention task/data_retention.go:124 delete partition table {"table": "event.local_root_policies_2024_01"} 2025-08-18T00:42:25.752Z INFO data-retention task/data_retention.go:115 create partition table {"table": "history.local_compliance_2025_09", "start": "2025-09-01", "end": "2025-10-01"} 2025-08-18T00:42:25.761Z INFO data-retention task/data_retention.go:124 delete partition table {"table": "history.local_compliance_2024_01"} 2025-08-18T00:42:25.764Z INFO data-retention task/data_retention.go:115 create partition table {"table": "event.managed_clusters_2025_09", "start": "2025-09-01", "end": "2025-10-01"} 2025-08-18T00:42:25.783Z INFO data-retention task/data_retention.go:124 delete partition table {"table": "event.managed_clusters_2024_01"} 2025-08-18T00:42:25.785Z INFO data-retention 
task/data_retention.go:135 delete records {"table": "status.managed_clusters", "before": "2024-02-01"} 2025-08-18T00:42:25.787Z INFO data-retention task/data_retention.go:135 delete records {"table": "status.leaf_hubs", "before": "2024-02-01"} 2025-08-18T00:42:25.788Z INFO data-retention task/data_retention.go:135 delete records {"table": "local_spec.policies", "before": "2024-02-01"} 2025-08-18T00:42:25.791Z INFO data-retention task/data_retention.go:99 finish running {"nextRun": "2025-08-25 00:00:00"} deleting the expired record in table: status.managed_clusters deleting the expired record in table: status.leaf_hubs deleting the expired record in table: local_spec.policies •Time Min 2024_02 Max 2025_09 table_name(event.local_policies) | min(local_policies_2024_02) | max(local_policies_2025_09) | min_deletion(0001-01-01) table_name(event.local_root_policies) | min(local_root_policies_2024_02) | max(local_root_policies_2025_09) | min_deletion(0001-01-01) table_name(history.local_compliance) | min(local_compliance_2024_02) | max(local_compliance_2025_09) | min_deletion(0001-01-01) table_name(event.managed_clusters) | min(managed_clusters_2024_02) | max(managed_clusters_2025_09) | min_deletion(0001-01-01) table_name(status.managed_clusters) | min() | max() | min_deletion(0001-01-01) table_name(status.leaf_hubs) | min() | max() | min_deletion(0001-01-01) table_name(local_spec.policies) | min() | max() | min_deletion(0001-01-01) •set local compliance job scheduleAt 00:00 2025-08-18T00:42:26.702Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "0001-01-01 00:00:00"} found the following compliance history: 2025-08-18T00:42:26.720Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:26.722Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 5, "offset": 0} 2025-08-18T00:42:26.723Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 5} 2025-08-18T00:42:26.723Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-19 00:00:00"} found the following compliance history: 00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000001 compliant 2025-08-18 00:00:00 +0000 +0000 0 00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000002 compliant 2025-08-18 00:00:00 +0000 +0000 0 00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000003 compliant 2025-08-18 00:00:00 +0000 +0000 0 00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000004 compliant 2025-08-18 00:00:00 +0000 +0000 0 00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000005 compliant 2025-08-18 00:00:00 +0000 +0000 0 found the following compliance history job log: >> 2025-08-18 00:42:26 2025-08-18 00:42:26 local-compliance-history 5 5 0 none •00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000001 non_compliant 2025-08-18 00:00:00 +0000 +0000 1 •00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000001 non_compliant 2025-08-18 00:00:00 +0000 +0000 2 •00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000001 unknown 2025-08-18 00:00:00 +0000 +0000 3 •2025-08-18T00:42:28.727Z INFO cronjob/scheduler.go:66 set SyncLocalCompliance job 
{"scheduleAt": "00:00"} 2025-08-18T00:42:28.727Z INFO cronjob/scheduler.go:75 set DataRetention job {"scheduleAt": "00:00"} 2025-08-18T00:42:28.727Z INFO cronjob/scheduler.go:103 launch the job {"name": "data-retention"} 2025-08-18T00:42:28.727Z INFO cronjob/scheduler.go:108 failed to launch the unknow job immediately {"name": "local-compliance-history"} 2025-08-18T00:42:28.727Z INFO cronjob/scheduler.go:108 failed to launch the unknow job immediately {"name": "unexpected_name"} •2025-08-18T00:42:28.727Z INFO cronjob/scheduler.go:86 start job scheduler 2025-08-18T00:42:28.727Z INFO cronjob/scheduler.go:108 failed to launch the unknow job immediately {"name": ""} 2025-08-18T00:42:28.727Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "0001-01-01 00:00:00"} 2025-08-18T00:42:28.728Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:28.728Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:28.738Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:28.738Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:29"} 2025-08-18T00:42:28.748Z INFO controller/controller.go:175 Starting EventSource {"controller": "backupPvcController", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim", "source": "kind source: *v1.PersistentVolumeClaim"} 2025-08-18T00:42:28.748Z INFO controller/controller.go:183 Starting Controller {"controller": "backupPvcController", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim"} 2025-08-18T00:42:28.793Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.overrides.components[0].configOverrides" ••2025-08-18T00:42:28.848Z INFO controller/controller.go:217 Starting workers {"controller": "backupPvcController", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim", "worker count": 1} 2025-08-18T00:42:29.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:29"} 2025-08-18T00:42:29.728Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:29.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:29.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:29.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:30"} 2025-08-18T00:42:30.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:30"} 2025-08-18T00:42:30.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:30.730Z INFO 
local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:30.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:30.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:31"} 2025-08-18T00:42:31.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:31"} 2025-08-18T00:42:31.730Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:31.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:31.731Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:31.731Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:32"} 2025-08-18T00:42:32.730Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:32"} 2025-08-18T00:42:32.731Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:32.731Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:32.732Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:32.732Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:33"} 2025-08-18T00:42:33.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:33"} 2025-08-18T00:42:33.730Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:33.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:33.731Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:33.731Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:34"} ••heartbeat-hub01 2025-08-18 00:42:33.925111 +0000 +0000 active heartbeat-hub02 2025-08-18 00:40:33.925111 +0000 +0000 active heartbeat-hub03 2025-08-18 00:42:13.925111 +0000 +0000 active heartbeat-hub04 2025-08-18 00:39:33.925111 +0000 +0000 inactive >> heartbeat: heartbeat-hub04 heartbeat-hub04 2025-08-18 00:41:33.925111437 +0000 UTC m=-36.662993434 inactive 2025-08-18T00:42:33.939Z INFO hubmanagement/hub_management.go:83 hub 
management status switch frequency {"interval": "1s"} 2025-08-18T00:42:34.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:34"} 2025-08-18T00:42:34.730Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:34.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:34.731Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:34.731Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:35"} 2025-08-18T00:42:35.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:35"} 2025-08-18T00:42:35.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:35.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:35.731Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:35.731Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:36"} 2025-08-18T00:42:36.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:36"} 2025-08-18T00:42:36.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:36.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:36.731Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:36.731Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:37"} >> hub management[90s]: heartbeat-hub02 -> inactive, heartbeat-hub04 -> active heartbeat-hub01 2025-08-18 00:42:33.925111 +0000 +0000 active heartbeat-hub03 2025-08-18 00:42:13.925111 +0000 +0000 active heartbeat-hub02 2025-08-18 00:40:33.925111 +0000 +0000 inactive heartbeat-hub04 2025-08-18 00:41:33.925111 +0000 +0000 active •2025-08-18T00:42:37.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:37"} 2025-08-18T00:42:37.730Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:37.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:37.731Z 
INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:37.731Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:38"} 2025-08-18T00:42:38.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:38"} 2025-08-18T00:42:38.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:38.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:38.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:38.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:39"} 2025-08-18T00:42:39.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:39"} 2025-08-18T00:42:39.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:39.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:39.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:39.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:40"} 2025-08-18T00:42:40.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:40"} 2025-08-18T00:42:40.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:40.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:40.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:40.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:41"} 2025-08-18T00:42:41.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:41"} 2025-08-18T00:42:41.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:41.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:41.730Z INFO local-compliance-history task/local_compliance_history.go:73 
The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:41.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:42"} 2025-08-18T00:42:42.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:42"} 2025-08-18T00:42:42.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:42.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:42.731Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:42.731Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:43"} 2025-08-18T00:42:43.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:43"} 2025-08-18T00:42:43.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:43.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:43.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:43.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:44"} 2025-08-18T00:42:44.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:44"} 2025-08-18T00:42:44.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:44.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:44.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:44.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:45"} 2025-08-18T00:42:45.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:45"} 2025-08-18T00:42:45.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:45.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:45.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": 
"2025-08-18", "insertedCount": 0} 2025-08-18T00:42:45.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:46"} 2025-08-18T00:42:46.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:46"} 2025-08-18T00:42:46.730Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:46.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:46.731Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:46.731Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:47"} 2025-08-18T00:42:47.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:47"} 2025-08-18T00:42:47.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:47.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:47.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:47.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:48"} 2025-08-18T00:42:48.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:48"} 2025-08-18T00:42:48.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:48.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:48.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:48.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:49"} 2025-08-18T00:42:49.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:49"} 2025-08-18T00:42:49.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:49.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:49.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:49.730Z INFO 
local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:50"} 2025-08-18T00:42:50.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:50"} 2025-08-18T00:42:50.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:50.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:50.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:50.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:51"} 2025-08-18T00:42:51.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:51"} 2025-08-18T00:42:51.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:51.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:51.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:51.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:52"} 2025-08-18T00:42:52.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:52"} 2025-08-18T00:42:52.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:52.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:52.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:52.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:53"} 2025-08-18T00:42:53.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:53"} 2025-08-18T00:42:53.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:53.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:53.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:53.730Z INFO local-compliance-history task/local_compliance_history.go:53 
finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:54"} 2025-08-18T00:42:54.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:54"} 2025-08-18T00:42:54.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:54.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:54.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:54.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:55"} 2025-08-18T00:42:55.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:55"} 2025-08-18T00:42:55.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:55.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:55.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:55.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:56"} 2025-08-18T00:42:56.728Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:56"} 2025-08-18T00:42:56.728Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:56.729Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:56.729Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:56.729Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:57"} 2025-08-18T00:42:57.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:57"} 2025-08-18T00:42:57.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:57.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:57.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:57.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 
00:42:58"} 2025-08-18T00:42:58.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:58"} 2025-08-18T00:42:58.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:58.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:58.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:58.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:42:59"} 2025-08-18T00:42:59.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:42:59"} 2025-08-18T00:42:59.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:42:59.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:42:59.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:42:59.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:43:00"} 2025-08-18T00:43:00.729Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:43:00"} 2025-08-18T00:43:00.729Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5} 2025-08-18T00:43:00.730Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0} 2025-08-18T00:43:00.730Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:43:00.730Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:43:01"} waiting for server to shut down....2025-08-18 00:43:00.970 UTC [25208] LOG: received fast shutdown request 2025-08-18 00:43:00.970 UTC [25208] LOG: aborting any active transactions 2025-08-18 00:43:00.970 UTC [25227] FATAL: terminating connection due to administrator command 2025-08-18 00:43:00.970 UTC [25240] FATAL: terminating connection due to administrator command 2025-08-18 00:43:00.971 UTC [25208] LOG: background worker "logical replication launcher" (PID 25215) exited with exit code 1 2025-08-18 00:43:00.972 UTC [25210] LOG: shutting down 2025-08-18 00:43:00.972 UTC [25210] LOG: checkpoint starting: shutdown immediate 2025-08-18 00:43:00.984 UTC [25210] LOG: checkpoint complete: wrote 1064 buffers (6.5%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.011 s, sync=0.003 s, total=0.013 s; sync files=523, longest=0.001 s, average=0.001 s; distance=5606 kB, estimate=5606 kB; lsn=0/1A58180, redo 
lsn=0/1A58180
2025-08-18 00:43:00.991 UTC [25208] LOG: database system is shut down
done
server stopped
2025-08-18T00:43:01.070Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
Ran 12 of 12 Specs in 50.395 seconds
2025-08-18T00:43:01.071Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 0 Skipped
2025-08-18T00:43:01.071Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "backupPvcController", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim"}
2025-08-18T00:43:01.071Z INFO controller/controller.go:239 All workers finished {"controller": "backupPvcController", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim"}
2025-08-18T00:43:01.071Z INFO manager/internal.go:550 Stopping and waiting for caches
--- PASS: TestController (50.40s)
PASS
2025-08-18T00:43:01.071Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:43:01.071Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:43:01.071Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager
ok github.com/stolostron/multicluster-global-hub/test/integration/manager/controller 50.494s
failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
=== RUN TestController
Running Suite: Manager Controller Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/migration
==================================================================================================================================
Random Seed: 1755477730
Will run 20 of 20 specs
The files belonging to this database system will be owned by user "1002500000".
This user must also own the server process.
The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".
Data page checksums are disabled.
creating directory /tmp/tmp/embedded-postgres-go-12053/extracted/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success.
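The initdb output above and the embedded-postgres-go-12053 runtime path suggest the suite provisions a throwaway PostgreSQL instance per run (the pg_ctl hint and server start follow below). A minimal sketch of how such an instance can be booted with github.com/fergusstrange/embedded-postgres; the port 12053 is taken from the log, while the runtime path, the lib/pq driver choice, and the default credentials are assumptions for illustration, not the suite's actual wiring:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	embeddedpostgres "github.com/fergusstrange/embedded-postgres"
	_ "github.com/lib/pq" // assumed driver choice for the example
)

func main() {
	// Boot a throwaway PostgreSQL on port 12053 (the port seen in the log).
	// RuntimePath is hypothetical; it only needs to be a writable scratch directory.
	pg := embeddedpostgres.NewDatabase(embeddedpostgres.DefaultConfig().
		Port(12053).
		RuntimePath("/tmp/embedded-postgres-example").
		StartTimeout(30 * time.Second))
	if err := pg.Start(); err != nil {
		log.Fatalf("start embedded postgres: %v", err)
	}
	defer func() {
		if err := pg.Stop(); err != nil {
			log.Printf("stop embedded postgres: %v", err)
		}
	}()

	// The suite then applies SQL scripts ("script 1.schemas.sql executed successfully.");
	// here we only verify the connection using the library's default credentials.
	conn, err := sql.Open("postgres",
		"host=localhost port=12053 user=postgres password=postgres dbname=postgres sslmode=disable")
	if err != nil {
		log.Fatalf("open connection: %v", err)
	}
	defer conn.Close()
	if err := conn.Ping(); err != nil {
		log.Fatalf("ping: %v", err)
	}
	fmt.Println("embedded postgres is ready")
}
```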
You can now start the database server using: /tmp/tmp/embedded-postgres-go-12053/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-12053/extracted/data -l logfile start waiting for server to start....2025-08-18 00:42:24.895 UTC [25186] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit 2025-08-18 00:42:24.895 UTC [25186] LOG: listening on IPv6 address "::1", port 12053 2025-08-18 00:42:24.895 UTC [25186] LOG: listening on IPv4 address "127.0.0.1", port 12053 2025-08-18 00:42:24.895 UTC [25186] LOG: listening on Unix socket "/tmp/.s.PGSQL.12053" 2025-08-18 00:42:24.897 UTC [25189] LOG: database system was shut down at 2025-08-18 00:42:24 UTC 2025-08-18 00:42:24.900 UTC [25186] LOG: database system is ready to accept connections done server started script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. script 1.upgrade.sql executed successfully. script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. 2025-08-18T00:42:25.398Z INFO controller/controller.go:175 Starting EventSource {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration", "source": "kind source: *v1alpha1.ManagedClusterMigration"} 2025-08-18T00:42:25.398Z INFO controller/controller.go:175 Starting EventSource {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration", "source": "kind source: *v1beta1.ManagedServiceAccount"} 2025-08-18T00:42:25.398Z INFO controller/controller.go:175 Starting EventSource {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:25.398Z INFO controller/controller.go:183 Starting Controller {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration"} ••••2025-08-18T00:42:25.519Z INFO controller/controller.go:217 Starting workers {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration", "worker count": 1} 2025-08-18T00:42:25.602Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745456356838 2025-08-18T00:42:25.602Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:25.607Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:25.611Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477745456356838 (phase: Validating) 2025-08-18T00:42:25.611Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477745456356838 2025-08-18T00:42:25.615Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 02c0ee9f-b069-4123-8045-cad521db1f50 2025-08-18T00:42:25.616Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:25.616Z INFO migration/migration_validating.go:103 migration 
validating from hub 2025-08-18T00:42:25.616Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:25.616Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:25.617Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477745456356838 2025-08-18T00:42:25.617Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:42:25.617Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:42:25.623Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745456356838 2025-08-18T00:42:25.623Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477745456356838 (phase: Initializing) 2025-08-18T00:42:25.623Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477745456356838 2025-08-18T00:42:25.623Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:25.627Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477745456356838/migration-test-1755477745456356838) to be created 2025-08-18T00:42:25.627Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477745456356838/migration-test-1755477745456356838) to be created, phase: Initializing 2025-08-18T00:42:25.631Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745456356838 2025-08-18T00:42:25.631Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477745456356838 (phase: Initializing) 2025-08-18T00:42:25.631Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477745456356838 2025-08-18T00:42:25.631Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:25.631Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477745456356838/migration-test-1755477745456356838) to be created 2025-08-18T00:42:25.820Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745456356838 2025-08-18T00:42:25.820Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477745456356838 •2025-08-18T00:42:25.829Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 02c0ee9f-b069-4123-8045-cad521db1f50 2025-08-18T00:42:25.830Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745456356838 2025-08-18T00:42:25.830Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:25.830Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:25.869Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745821156280 2025-08-18T00:42:25.869Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:25.872Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:25.875Z INFO migration/migration_pending.go:82 
selected migration: migration-test-1755477745821156280 (phase: Validating) 2025-08-18T00:42:25.875Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477745821156280 2025-08-18T00:42:25.878Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: f9b92d9f-86c4-438e-990c-ab7c114c7d2d 2025-08-18T00:42:25.878Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:25.878Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:25.878Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:25.878Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:25.878Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterNotFound): no valid managed clusters found in database: [non-existent-cluster], phase: Failed 2025-08-18T00:42:25.881Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745821156280 2025-08-18T00:42:25.881Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477745821156280 (phase: Validating) 2025-08-18T00:42:25.881Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477745821156280 2025-08-18T00:42:25.881Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:25.882Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:25.882Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:25.882Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:25.882Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterNotFound): no valid managed clusters found in database: [non-existent-cluster], phase: Failed 2025-08-18T00:42:25.896Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745821156280 2025-08-18T00:42:25.896Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:25.896Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:26.083Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745821156280 2025-08-18T00:42:26.083Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477745821156280 •2025-08-18T00:42:26.087Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: f9b92d9f-86c4-438e-990c-ab7c114c7d2d 2025-08-18T00:42:26.087Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745821156280 2025-08-18T00:42:26.087Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:26.087Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:26.310Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477746084356900 2025-08-18T00:42:26.310Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:26.314Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:26.330Z 
INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:26.334Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477746084356900 (phase: Validating) 2025-08-18T00:42:26.334Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477746084356900 2025-08-18T00:42:26.339Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: a684ea4f-8d2f-4966-b8f1-312651063b6c 2025-08-18T00:42:26.339Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:26.339Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:26.339Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:26.339Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:26.343Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477746084356900 2025-08-18T00:42:26.343Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:42:26.343Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:42:26.350Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477746084356900 2025-08-18T00:42:26.350Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477746084356900 (phase: Initializing) 2025-08-18T00:42:26.350Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477746084356900 2025-08-18T00:42:26.350Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:26.354Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477746084356900/migration-test-1755477746084356900) to be created 2025-08-18T00:42:26.354Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477746084356900/migration-test-1755477746084356900) to be created, phase: Initializing 2025-08-18T00:42:26.359Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477746084356900 2025-08-18T00:42:26.359Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477746084356900 (phase: Initializing) 2025-08-18T00:42:26.359Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477746084356900 2025-08-18T00:42:26.359Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:26.359Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477746084356900/migration-test-1755477746084356900) to be created 2025-08-18T00:42:26.525Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.healthCheck" 2025-08-18T00:42:30.623Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745456356838 2025-08-18T00:42:30.623Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477746084356900 (phase: Initializing) 2025-08-18T00:42:30.623Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477746084356900 2025-08-18T00:42:30.623Z INFO 
migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:30.623Z INFO migration/migration_initializing.go:142 migration initializing finished 2025-08-18T00:42:30.623Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying 2025-08-18T00:42:30.631Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:42:30.631Z INFO migration/migration_deploying.go:50 migration deploying to source hub: hub1-test-1755477746084356900 2025-08-18T00:42:30.631Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(Waiting): waiting for resources to be prepared in the source hub hub1-test-1755477746084356900, phase: Deploying 2025-08-18T00:42:30.645Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477746084356900 2025-08-18T00:42:30.645Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477746084356900 (phase: Deploying) 2025-08-18T00:42:30.645Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477746084356900 2025-08-18T00:42:30.645Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:42:30.796Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477746084356900 2025-08-18T00:42:30.796Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477746084356900 •2025-08-18T00:42:30.802Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: a684ea4f-8d2f-4966-b8f1-312651063b6c 2025-08-18T00:42:30.802Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477746084356900 2025-08-18T00:42:30.802Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:30.802Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:30.848Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477750796654228 2025-08-18T00:42:30.848Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:30.865Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:30.872Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477750796654228 (phase: Validating) 2025-08-18T00:42:30.873Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477750796654228 2025-08-18T00:42:30.877Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 7cbfc866-e311-473b-8c6c-da8c3b5d639b 2025-08-18T00:42:30.880Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:30.880Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:30.880Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:30.880Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:30.881Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477750796654228 2025-08-18T00:42:30.881Z INFO migration/migration_validating.go:251 0 clusters 
verify failed 2025-08-18T00:42:30.881Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:42:30.888Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477750796654228 2025-08-18T00:42:30.888Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477750796654228 (phase: Initializing) 2025-08-18T00:42:30.888Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477750796654228 2025-08-18T00:42:30.888Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:30.892Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477750796654228/migration-test-1755477750796654228) to be created 2025-08-18T00:42:30.892Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477750796654228/migration-test-1755477750796654228) to be created, phase: Initializing 2025-08-18T00:42:30.901Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477750796654228 2025-08-18T00:42:30.902Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477750796654228 (phase: Initializing) 2025-08-18T00:42:30.902Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477750796654228 2025-08-18T00:42:30.902Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:30.902Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477750796654228/migration-test-1755477750796654228) to be created 2025-08-18T00:42:31.351Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477746084356900 2025-08-18T00:42:31.351Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477750796654228 (phase: Initializing) 2025-08-18T00:42:31.351Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477750796654228 2025-08-18T00:42:31.351Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:31.351Z INFO migration/migration_initializing.go:96 sent initializing event to target hub hub2-test-1755477750796654228 2025-08-18T00:42:31.351Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Error): initializing source hub hub1-test-1755477750796654228 with err :initialization failed, phase: Rollbacking 2025-08-18T00:42:31.355Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:42:31.355Z INFO migration/migration_rollbacking.go:59 sending rollback event to source hub: hub1-test-1755477750796654228 2025-08-18T00:42:31.355Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477750796654228 to complete Initializing stage rollback, phase: Rollbacking 2025-08-18T00:42:31.360Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477750796654228 2025-08-18T00:42:31.360Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477750796654228 (phase: Rollbacking) 2025-08-18T00:42:31.360Z INFO migration/migration_controller.go:139 processing migration instance: 
migration-test-1755477750796654228 2025-08-18T00:42:31.360Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:42:31.395Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477750796654228 2025-08-18T00:42:31.395Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477750796654228 •2025-08-18T00:42:31.401Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 7cbfc866-e311-473b-8c6c-da8c3b5d639b 2025-08-18T00:42:31.401Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477750796654228 2025-08-18T00:42:31.401Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:31.401Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:31.429Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753 2025-08-18T00:42:31.429Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:31.436Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:31.442Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477751395898753 (phase: Validating) 2025-08-18T00:42:31.442Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753 2025-08-18T00:42:31.447Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: ced633de-23fc-4ab7-b130-3e40ef247f73 2025-08-18T00:42:31.447Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:31.447Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:31.447Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:31.447Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:31.448Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477751395898753 2025-08-18T00:42:31.448Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:42:31.448Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:42:31.465Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:42:31.471Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753 2025-08-18T00:42:31.471Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477751395898753 (phase: Initializing) 2025-08-18T00:42:31.471Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753 2025-08-18T00:42:31.471Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:31.478Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477751395898753/migration-test-1755477751395898753) to be created 2025-08-18T00:42:31.478Z INFO migration/migration_pending.go:101 update condition 
ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477751395898753/migration-test-1755477751395898753) to be created, phase: Initializing 2025-08-18T00:42:31.483Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753 2025-08-18T00:42:31.484Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477751395898753 (phase: Initializing) 2025-08-18T00:42:31.484Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753 2025-08-18T00:42:31.484Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:31.484Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477751395898753/migration-test-1755477751395898753) to be created 2025-08-18T00:42:35.646Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745456356838 2025-08-18T00:42:35.646Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477751395898753 (phase: Initializing) 2025-08-18T00:42:35.646Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753 2025-08-18T00:42:35.646Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:35.646Z INFO migration/migration_initializing.go:142 migration initializing finished 2025-08-18T00:42:35.646Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying 2025-08-18T00:42:35.651Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:42:35.651Z INFO migration/migration_deploying.go:50 migration deploying to source hub: hub1-test-1755477751395898753 2025-08-18T00:42:35.651Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(Waiting): waiting for resources to be prepared in the source hub hub1-test-1755477751395898753, phase: Deploying 2025-08-18T00:42:35.655Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753 2025-08-18T00:42:35.655Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477751395898753 (phase: Deploying) 2025-08-18T00:42:35.655Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753 2025-08-18T00:42:35.655Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:42:35.889Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477750796654228 2025-08-18T00:42:35.889Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477751395898753 (phase: Deploying) 2025-08-18T00:42:35.889Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753 2025-08-18T00:42:35.889Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:42:35.889Z INFO migration/migration_deploying.go:92 migration deploying finished 2025-08-18T00:42:35.889Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(ResourcesDeployed): Resources have been successfully deployed to the target hub cluster, phase: Registering 2025-08-18T00:42:35.894Z INFO migration/migration_registering.go:34 migration registering 2025-08-18T00:42:35.894Z INFO 
migration/migration_registering.go:49 migration registering: hub1-test-1755477751395898753
2025-08-18T00:42:35.894Z INFO migration/migration_pending.go:101 update condition ClusterRegistered(Waiting): waiting for managed clusters to migrating from source hub hub1-test-1755477751395898753, phase: Registering
2025-08-18T00:42:35.899Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753
2025-08-18T00:42:35.899Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477751395898753 (phase: Registering)
2025-08-18T00:42:35.899Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753
2025-08-18T00:42:35.899Z INFO migration/migration_registering.go:34 migration registering
2025-08-18T00:42:36.361Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477746084356900
2025-08-18T00:42:36.361Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477751395898753 (phase: Registering)
2025-08-18T00:42:36.361Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753
2025-08-18T00:42:36.361Z INFO migration/migration_registering.go:34 migration registering
2025-08-18T00:42:36.361Z INFO migration/migration_pending.go:101 update condition ClusterRegistered(ClusterRegistered): All migrated clusters have been successfully registered, phase: Cleaning
2025-08-18T00:42:36.367Z INFO migration/migration_cleaning.go:37 migration start cleaning
2025-08-18T00:42:36.370Z INFO migration/migration_pending.go:101 update condition ResourceCleaned(Waiting): The target hub hub2-test-1755477751395898753 is cleaning, phase: Cleaning
2025-08-18T00:42:36.375Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753
2025-08-18T00:42:36.375Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477751395898753 (phase: Cleaning)
2025-08-18T00:42:36.375Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753
2025-08-18T00:42:36.375Z INFO migration/migration_cleaning.go:37 migration start cleaning
2025-08-18T00:42:36.377Z ERROR migration/migration_cleaning.go:53 failed to delete the managedServiceAccount: hub2-test-1755477751395898753/migration-test-1755477751395898753
github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).cleaning
	/go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_cleaning.go:53
github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_controller.go:205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:42:36.377Z INFO migration/migration_pending.go:101 update condition ResourceCleaned(Error): [Warning - Cleanup Issues] failed to delete managedServiceAccount: managedserviceaccounts.authentication.open-cluster-management.io "migration-test-1755477751395898753" not found., phase: Completed
2025-08-18T00:42:36.383Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753
2025-08-18T00:42:36.383Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:42:36.383Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:42:36.384Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753
2025-08-18T00:42:36.384Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:42:36.384Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:42:36.471Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753
2025-08-18T00:42:36.471Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:42:36.471Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:42:40.656Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477745456356838
2025-08-18T00:42:40.656Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:42:40.656Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:42:40.899Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477750796654228
2025-08-18T00:42:40.899Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:42:40.899Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:42:41.375Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477746084356900
2025-08-18T00:42:41.376Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:42:41.376Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
------------------------------
• [FAILED] [14.617 seconds]
Migration Phase Transitions - Simplified
[It] should complete full successful migration lifecycle
/go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/migration/migration_phase_test.go:201
Timeline >>
STEP: Creating migration CR @ 08/18/25 00:42:31.426
STEP: Verifying validation and reaching Initializing phase @ 08/18/25 00:42:31.43
STEP: Creating token secret @ 08/18/25 00:42:31.631
STEP: Progressing through Initializing -> Deploying @ 08/18/25 00:42:31.742
STEP: Progressing through Deploying -> Registering @ 08/18/25 00:42:35.8
STEP: Progressing through Registering -> Cleaning @ 08/18/25 00:42:36.001
[FAILED] in [It] - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/migration/migration_phase_test.go:298 @ 08/18/25 00:42:46.002
STEP: Cleaning up test resources @ 08/18/25 00:42:46.002
STEP: Cleaning up migration CR and waiting for deletion @ 08/18/25 00:42:46.01
STEP: Ensuring no migrations are running before next test @ 08/18/25 00:42:46.012
<< Timeline
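The timeline above shows the spec stalled at migration_phase_test.go:298 during "Progressing through Registering -> Cleaning": the controller had already recorded ResourceCleaned(Error) with phase: Completed at 00:42:36.377, so a poll waiting specifically for Cleaning could only time out (the error detail follows below). A hedged Gomega sketch, not the repository's actual assertion, of a phase check that tolerates the fast Cleaning-to-Completed hop; getMigrationPhase is a hypothetical helper:

```go
package migration_test // hypothetical package, mirroring an envtest-style suite

import (
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// getMigrationPhase is a hypothetical stand-in for however the suite reads the
// ManagedClusterMigration status phase (e.g. via the controller-runtime client).
func getMigrationPhase() (string, error) { return "Completed", nil }

var _ = Describe("migration phase assertions", func() {
	It("waits for Cleaning but tolerates the fast hop to Completed", func() {
		Eventually(func(g Gomega) {
			phase, err := getMigrationPhase()
			g.Expect(err).NotTo(HaveOccurred())
			// Cleaning lasted only milliseconds in this run, so a 100ms poll can
			// easily observe Completed instead; accept either phase.
			g.Expect(phase).To(BeElementOf("Cleaning", "Completed"))
		}, 10*time.Second, 100*time.Millisecond).Should(Succeed())
	})
})
```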
2025-08-18T00:42:46.013Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753
2025-08-18T00:42:46.013Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477751395898753
[FAILED] Timed out after 10.000s.
Expected success, but got an error:
    <*errors.errorString | 0xc0004850c0>:
    expected phase Cleaning, got Completed
    {
        s: "expected phase Cleaning, got Completed",
    }
In [It] at: /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/migration/migration_phase_test.go:298 @ 08/18/25 00:42:46.002
------------------------------
2025-08-18T00:42:46.016Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: ced633de-23fc-4ab7-b130-3e40ef247f73
2025-08-18T00:42:46.016Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477751395898753
2025-08-18T00:42:46.016Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:42:46.016Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:42:46.228Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766013314936
2025-08-18T00:42:46.228Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:42:46.230Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:42:46.243Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:42:46.246Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477766013314936 (phase: Validating)
2025-08-18T00:42:46.246Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477766013314936
2025-08-18T00:42:46.248Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 19e8ffaf-373b-4587-882f-d21561a39aa3
2025-08-18T00:42:46.248Z INFO migration/migration_validating.go:78 migration validating
2025-08-18T00:42:46.248Z INFO migration/migration_validating.go:103 migration validating from hub
2025-08-18T00:42:46.248Z INFO migration/migration_pending.go:101 update condition ResourceValidated(HubClusterInvalid): source hub non-existent-hub: ManagedCluster.cluster.open-cluster-management.io "non-existent-hub" not found, phase: Failed
2025-08-18T00:42:46.251Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766013314936
2025-08-18T00:42:46.251Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:42:46.251Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:42:46.251Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766013314936
2025-08-18T00:42:46.251Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:42:46.251Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
•2025-08-18T00:42:46.439Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766013314936
2025-08-18T00:42:46.440Z INFO migration/migration_controller.go:139 processing
migration instance: migration-test-1755477766013314936 2025-08-18T00:42:46.443Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 19e8ffaf-373b-4587-882f-d21561a39aa3 2025-08-18T00:42:46.443Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766013314936 2025-08-18T00:42:46.443Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:46.443Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:46.457Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766439951716 2025-08-18T00:42:46.457Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:46.459Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:46.472Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:46.476Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477766439951716 (phase: Validating) 2025-08-18T00:42:46.476Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477766439951716 2025-08-18T00:42:46.479Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: bf1733a2-de5d-4a06-9f00-9843e22a77d8 2025-08-18T00:42:46.479Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:46.479Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:46.479Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:46.479Z INFO migration/migration_pending.go:101 update condition ResourceValidated(HubClusterInvalid): destination hub non-existent-hub: ManagedCluster.cluster.open-cluster-management.io "non-existent-hub" not found, phase: Failed 2025-08-18T00:42:46.493Z INFO migration/migration_pending.go:101 update condition ResourceValidated(HubClusterInvalid): destination hub non-existent-hub: ManagedCluster.cluster.open-cluster-management.io "non-existent-hub" not found, phase: Failed 2025-08-18T00:42:46.497Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766439951716 2025-08-18T00:42:46.497Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:46.497Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:46.497Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766439951716 2025-08-18T00:42:46.497Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:46.497Z INFO migration/migration_controller.go:135 no desired managedclustermigration found •2025-08-18T00:42:46.668Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766439951716 2025-08-18T00:42:46.669Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477766439951716 2025-08-18T00:42:46.671Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: bf1733a2-de5d-4a06-9f00-9843e22a77d8 2025-08-18T00:42:46.671Z INFO 
migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766439951716 2025-08-18T00:42:46.671Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:46.671Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:47.086Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766668832019 2025-08-18T00:42:47.086Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:47.088Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:47.101Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:47.104Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477766668832019 (phase: Validating) 2025-08-18T00:42:47.104Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477766668832019 2025-08-18T00:42:47.106Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 8c106174-0736-4869-af0c-ab3d4124b481 2025-08-18T00:42:47.106Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:47.106Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:47.106Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:47.106Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:47.107Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterNotFound): no valid managed clusters found in database: [non-existent-cluster], phase: Failed 2025-08-18T00:42:47.109Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766668832019 2025-08-18T00:42:47.109Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:47.109Z INFO migration/migration_controller.go:135 no desired managedclustermigration found •2025-08-18T00:42:47.299Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766668832019 2025-08-18T00:42:47.299Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477766668832019 2025-08-18T00:42:47.301Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 8c106174-0736-4869-af0c-ab3d4124b481 2025-08-18T00:42:47.302Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477766668832019 2025-08-18T00:42:47.302Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:47.302Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:47.523Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477767298959026 2025-08-18T00:42:47.523Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:47.525Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 
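Entries of the form "update condition ResourceValidated(ClusterNotFound): ..., phase: Failed" pair a condition type, a reason, and a message with a phase move. A minimal sketch of how such an update is commonly expressed with the apimachinery condition helpers; the MigrationStatus type and setCondition helper are assumptions for illustration, not the project's actual API:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MigrationStatus is a hypothetical stand-in for the CR status the controller updates.
type MigrationStatus struct {
	Phase      string
	Conditions []metav1.Condition
}

// setCondition mirrors the log pattern "update condition <Type>(<Reason>): <message>, phase: <Phase>".
func setCondition(st *MigrationStatus, condType, reason, message, phase string, ok bool) {
	status := metav1.ConditionFalse
	if ok {
		status = metav1.ConditionTrue
	}
	// SetStatusCondition adds or updates the condition and stamps LastTransitionTime.
	meta.SetStatusCondition(&st.Conditions, metav1.Condition{
		Type:    condType,
		Status:  status,
		Reason:  reason,
		Message: message,
	})
	st.Phase = phase
}

func main() {
	st := &MigrationStatus{Phase: "Validating"}
	setCondition(st, "ResourceValidated", "ClusterNotFound",
		"no valid managed clusters found in database: [non-existent-cluster]", "Failed", false)
	fmt.Printf("phase=%s conditions=%d\n", st.Phase, len(st.Conditions))
}
```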
2025-08-18T00:42:47.538Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:47.541Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477767298959026 (phase: Validating) 2025-08-18T00:42:47.541Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477767298959026 2025-08-18T00:42:47.544Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 9f7ded46-c28f-407f-9ce6-046aac444313 2025-08-18T00:42:47.544Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:47.544Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:47.544Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:47.544Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:47.544Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477767298959026-dest 2025-08-18T00:42:47.544Z WARN migration/migration_validating.go:246 cluster cluster-test-1755477767298959026-dest is already on hub hub2-test-1755477767298959026 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validateClustersForMigration /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:246 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validating /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:131 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_controller.go:160 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:47.545Z INFO migration/migration_validating.go:251 1 clusters verify failed 2025-08-18T00:42:47.545Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterConflict): 1 clusters validate failed, please check the events for details, phase: Failed 2025-08-18T00:42:47.547Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477767298959026 2025-08-18T00:42:47.547Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477767298959026 (phase: Validating) 2025-08-18T00:42:47.547Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477767298959026 2025-08-18T00:42:47.547Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:47.547Z INFO migration/migration_validating.go:103 migration validating from hub 
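The WARN entry above is followed by a full call stack, which is the behavior of a zap logger built in development mode (stack traces are attached at Warn and above). Assuming zap is the logger in use, a small sketch of limiting stack traces to Error level so warnings stay on one line; this is illustrative wiring, not the project's actual logging setup:

```go
package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	// Development-style config produces the human-readable lines seen in this log;
	// AddStacktrace(ErrorLevel) keeps WARN entries from also dumping a call stack.
	cfg := zap.NewDevelopmentConfig()
	logger, err := cfg.Build(zap.AddStacktrace(zapcore.ErrorLevel))
	if err != nil {
		panic(err)
	}
	defer func() { _ = logger.Sync() }()

	// One line, no stack trace:
	logger.Warn("cluster cluster-test is already on hub hub2-test")
	// Errors still carry a stack trace for debugging:
	logger.Error("failed to delete the managedServiceAccount")
}
```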
2025-08-18T00:42:47.547Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:47.547Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:47.547Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477767298959026-dest 2025-08-18T00:42:47.547Z WARN migration/migration_validating.go:246 cluster cluster-test-1755477767298959026-dest is already on hub hub2-test-1755477767298959026 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validateClustersForMigration /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:246 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validating /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:131 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_controller.go:160 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:47.547Z INFO migration/migration_validating.go:251 1 clusters verify failed 2025-08-18T00:42:47.547Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477767298959026 2025-08-18T00:42:47.547Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:47.547Z INFO migration/migration_controller.go:135 no desired managedclustermigration found •2025-08-18T00:42:47.738Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477767298959026 2025-08-18T00:42:47.738Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477767298959026 2025-08-18T00:42:47.741Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 9f7ded46-c28f-407f-9ce6-046aac444313 2025-08-18T00:42:47.741Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477767298959026 2025-08-18T00:42:47.741Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:47.741Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:48.565Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477767738795005 2025-08-18T00:42:48.565Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:48.567Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 
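The validating step walks each requested cluster and warns when it is already attached to the destination hub (ClusterConflict) or, in the spec that follows, when it cannot be found on the source hub (ClusterNotFound), then tallies "N clusters verify failed". A toy sketch of that per-cluster check; the clusterHub map and the function shape are assumptions standing in for the manager's real database lookup:

```go
package main

import "fmt"

// clusterHub is a hypothetical lookup of cluster name -> hub it currently lives on,
// standing in for whatever the manager reads from its database.
type clusterHub map[string]string

// verifyClusters mirrors the warnings in the log: a cluster already on the
// destination hub is a conflict, a cluster missing from the source hub is not found.
func verifyClusters(clusters []string, fromHub, toHub string, placement clusterHub) int {
	failed := 0
	for _, c := range clusters {
		fmt.Println("verify cluster:", c)
		hub, ok := placement[c]
		switch {
		case ok && hub == toHub:
			fmt.Printf("cluster %s is already on hub %s\n", c, toHub)
			failed++
		case !ok || hub != fromHub:
			fmt.Printf("cluster %s not found in hub %s\n", c, fromHub)
			failed++
		}
	}
	fmt.Printf("%d clusters verify failed\n", failed)
	return failed
}

func main() {
	placement := clusterHub{"cluster-a": "hub2"} // already migrated to the destination
	verifyClusters([]string{"cluster-a", "cluster-b"}, "hub1", "hub2", placement)
}
```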
2025-08-18T00:42:48.569Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477767738795005 (phase: Validating) 2025-08-18T00:42:48.569Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477767738795005 2025-08-18T00:42:48.571Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: f35a04c1-9a9b-4e23-99b4-dc0c67059b7b 2025-08-18T00:42:48.571Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:48.571Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:48.571Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:48.572Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:48.572Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477767738795005 2025-08-18T00:42:48.572Z WARN migration/migration_validating.go:246 cluster cluster-test-1755477767738795005 not found in hub hub1-test-1755477767738795005 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validateClustersForMigration /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:246 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validating /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:131 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_controller.go:160 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:48.572Z INFO migration/migration_validating.go:251 1 clusters verify failed 2025-08-18T00:42:48.572Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterNotFound): 1 clusters validate failed, please check the events for details, phase: Failed 2025-08-18T00:42:48.574Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477767738795005 2025-08-18T00:42:48.574Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:48.575Z INFO migration/migration_controller.go:135 no desired managedclustermigration found •2025-08-18T00:42:48.786Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477767738795005 2025-08-18T00:42:48.786Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477767738795005 2025-08-18T00:42:48.788Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: f35a04c1-9a9b-4e23-99b4-dc0c67059b7b 2025-08-18T00:42:48.789Z INFO 
migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477767738795005 2025-08-18T00:42:48.789Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:48.789Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:42:49.203Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:49.203Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:49.205Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:49.207Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477768786155865 (phase: Validating) 2025-08-18T00:42:49.207Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477768786155865 2025-08-18T00:42:49.209Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 793013ad-8b93-4d4e-91ea-7301eb0b6b9c 2025-08-18T00:42:49.209Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:49.209Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:49.209Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:49.209Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:49.209Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477768786155865 2025-08-18T00:42:49.209Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:42:49.209Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:42:49.211Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:49.211Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477768786155865 (phase: Validating) 2025-08-18T00:42:49.211Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477768786155865 2025-08-18T00:42:49.212Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:49.212Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:49.212Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:49.212Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:49.212Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477768786155865 2025-08-18T00:42:49.212Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:42:49.212Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:49.212Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477768786155865 (phase: Initializing) 2025-08-18T00:42:49.212Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477768786155865 2025-08-18T00:42:49.212Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:49.214Z INFO 
migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477768786155865/migration-test-1755477768786155865) to be created 2025-08-18T00:42:49.214Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477768786155865/migration-test-1755477768786155865) to be created, phase: Initializing 2025-08-18T00:42:49.216Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:49.216Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477768786155865 (phase: Initializing) 2025-08-18T00:42:49.216Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477768786155865 2025-08-18T00:42:49.216Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:49.216Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477768786155865/migration-test-1755477768786155865) to be created 2025-08-18T00:42:54.212Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:54.213Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477768786155865 (phase: Initializing) 2025-08-18T00:42:54.213Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477768786155865 2025-08-18T00:42:54.213Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:54.213Z INFO migration/migration_initializing.go:96 sent initializing event to target hub hub2-test-1755477768786155865 2025-08-18T00:42:54.213Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Error): initializing source hub hub1-test-1755477768786155865 with err :initialization failed, phase: Rollbacking 2025-08-18T00:42:54.216Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:42:54.216Z INFO migration/migration_rollbacking.go:59 sending rollback event to source hub: hub1-test-1755477768786155865 2025-08-18T00:42:54.216Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477768786155865 to complete Initializing stage rollback, phase: Rollbacking 2025-08-18T00:42:54.219Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:54.219Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477768786155865 (phase: Rollbacking) 2025-08-18T00:42:54.219Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477768786155865 2025-08-18T00:42:54.219Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:42:54.219Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477768786155865 to complete Initializing stage rollback, phase: Rollbacking 2025-08-18T00:42:54.232Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:54.232Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477768786155865 (phase: Rollbacking) 2025-08-18T00:42:54.232Z INFO migration/migration_controller.go:139 processing migration instance: 
migration-test-1755477768786155865 2025-08-18T00:42:54.232Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:42:54.331Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:54.331Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477768786155865 2025-08-18T00:42:54.334Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 793013ad-8b93-4d4e-91ea-7301eb0b6b9c 2025-08-18T00:42:54.334Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:54.334Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:54.334Z INFO migration/migration_controller.go:135 no desired managedclustermigration found •2025-08-18T00:42:54.950Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477774531777541 2025-08-18T00:42:54.950Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:42:54.953Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:54.966Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:42:54.969Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477774531777541 (phase: Validating) 2025-08-18T00:42:54.969Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477774531777541 2025-08-18T00:42:54.972Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 19e0a4fa-f86a-4e7e-98c5-ea000f7b102e 2025-08-18T00:42:54.972Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:42:54.972Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:42:54.972Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:42:54.972Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:42:54.972Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477774531777541 2025-08-18T00:42:54.972Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:42:54.972Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:42:54.975Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477774531777541 2025-08-18T00:42:54.975Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477774531777541 (phase: Initializing) 2025-08-18T00:42:54.975Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477774531777541 2025-08-18T00:42:54.975Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:54.977Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477774531777541/migration-test-1755477774531777541) to be created 2025-08-18T00:42:54.977Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting 
for token secret (hub2-test-1755477774531777541/migration-test-1755477774531777541) to be created, phase: Initializing 2025-08-18T00:42:54.979Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477774531777541 2025-08-18T00:42:54.979Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477774531777541 (phase: Initializing) 2025-08-18T00:42:54.979Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477774531777541 2025-08-18T00:42:54.979Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:54.979Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477774531777541/migration-test-1755477774531777541) to be created 2025-08-18T00:42:54.979Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477774531777541 2025-08-18T00:42:54.979Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477774531777541 (phase: Initializing) 2025-08-18T00:42:54.979Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477774531777541 2025-08-18T00:42:54.979Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:54.979Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477774531777541/migration-test-1755477774531777541) to be created 2025-08-18T00:42:59.220Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:42:59.220Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477774531777541 (phase: Initializing) 2025-08-18T00:42:59.220Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477774531777541 2025-08-18T00:42:59.220Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:42:59.220Z INFO migration/migration_initializing.go:111 sent initialing events to source hubs: hub1-test-1755477774531777541 2025-08-18T00:42:59.220Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Error): initializing target hub hub2-test-1755477774531777541 with err :initialization failed, phase: Rollbacking 2025-08-18T00:42:59.223Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:42:59.223Z INFO migration/migration_rollbacking.go:59 sending rollback event to source hub: hub1-test-1755477774531777541 2025-08-18T00:42:59.223Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477774531777541 to complete Initializing stage rollback, phase: Rollbacking 2025-08-18T00:42:59.225Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477774531777541 2025-08-18T00:42:59.225Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477774531777541 (phase: Rollbacking) 2025-08-18T00:42:59.225Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477774531777541 2025-08-18T00:42:59.225Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:42:59.276Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477774531777541 2025-08-18T00:42:59.276Z INFO 
migration/migration_controller.go:139 processing migration instance: migration-test-1755477774531777541 2025-08-18T00:42:59.279Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 19e0a4fa-f86a-4e7e-98c5-ea000f7b102e 2025-08-18T00:42:59.279Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477774531777541 2025-08-18T00:42:59.279Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:59.279Z INFO migration/migration_controller.go:135 no desired managedclustermigration found •2025-08-18T00:42:59.976Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477774531777541 2025-08-18T00:42:59.976Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:42:59.976Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:43:00.097Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477779477196626 2025-08-18T00:43:00.097Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:43:00.099Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:43:00.112Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:43:00.114Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477779477196626 (phase: Validating) 2025-08-18T00:43:00.114Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477779477196626 2025-08-18T00:43:00.117Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: e20cd441-b443-4f45-8a15-a968f2a11f9e 2025-08-18T00:43:00.117Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:43:00.117Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:43:00.117Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:43:00.117Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:43:00.118Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477779477196626 2025-08-18T00:43:00.118Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:43:00.118Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:43:00.120Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477779477196626 2025-08-18T00:43:00.120Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477779477196626 (phase: Validating) 2025-08-18T00:43:00.120Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477779477196626 2025-08-18T00:43:00.120Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:43:00.120Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:43:00.120Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:43:00.120Z INFO migration/migration_validating.go:128 migration 
validating clusters 2025-08-18T00:43:00.120Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477779477196626 2025-08-18T00:43:00.120Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:43:00.120Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477779477196626 2025-08-18T00:43:00.120Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477779477196626 (phase: Initializing) 2025-08-18T00:43:00.120Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477779477196626 2025-08-18T00:43:00.120Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:00.122Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477779477196626/migration-test-1755477779477196626) to be created 2025-08-18T00:43:00.122Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477779477196626/migration-test-1755477779477196626) to be created, phase: Initializing 2025-08-18T00:43:00.124Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477779477196626 2025-08-18T00:43:00.124Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477779477196626 (phase: Initializing) 2025-08-18T00:43:00.124Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477779477196626 2025-08-18T00:43:00.124Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:00.124Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477779477196626/migration-test-1755477779477196626) to be created 2025-08-18T00:43:00.414Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477779477196626 2025-08-18T00:43:00.414Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477779477196626 2025-08-18T00:43:00.417Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: e20cd441-b443-4f45-8a15-a968f2a11f9e 2025-08-18T00:43:00.417Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477779477196626 2025-08-18T00:43:00.417Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:43:00.417Z INFO migration/migration_controller.go:135 no desired managedclustermigration found •2025-08-18T00:43:01.035Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:01.035Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:43:01.037Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:43:01.049Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:43:01.051Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Validating) 2025-08-18T00:43:01.051Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 
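The repeated "waiting for token secret (...) to be created" entries are logged again on each reconcile while the secret does not yet exist, which matches the usual controller-runtime pattern of returning early and requeueing until a dependency appears. A minimal sketch of that pattern follows, assuming controller-runtime and client-go; the function name, secret key, and five-second interval are illustrative, not taken from the project:

package migration

import (
    "context"
    "time"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/types"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForTokenSecret requeues the reconcile until the named secret shows up.
func waitForTokenSecret(ctx context.Context, c client.Client, key types.NamespacedName) (ctrl.Result, error) {
    secret := &corev1.Secret{}
    if err := c.Get(ctx, key, secret); err != nil {
        if apierrors.IsNotFound(err) {
            // Not created yet: come back later instead of treating this as a failure.
            return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
        }
        return ctrl.Result{}, err
    }
    // Secret exists: the caller can continue with the next stage of the migration.
    return ctrl.Result{}, nil
}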
2025-08-18T00:43:01.053Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: b5e17558-4ee2-480e-aed2-4595f78f76fc 2025-08-18T00:43:01.053Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:43:01.053Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:43:01.053Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:43:01.053Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:43:01.054Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477780615427320 2025-08-18T00:43:01.054Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:43:01.054Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:43:01.056Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:01.056Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Initializing) 2025-08-18T00:43:01.056Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 2025-08-18T00:43:01.056Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:01.057Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477780615427320/migration-test-1755477780615427320) to be created 2025-08-18T00:43:01.057Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477780615427320/migration-test-1755477780615427320) to be created, phase: Initializing 2025-08-18T00:43:01.059Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:01.059Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Initializing) 2025-08-18T00:43:01.059Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 2025-08-18T00:43:01.059Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:01.059Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477780615427320/migration-test-1755477780615427320) to be created 2025-08-18T00:43:01.059Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:01.060Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Initializing) 2025-08-18T00:43:01.060Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 2025-08-18T00:43:01.060Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:01.060Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477780615427320/migration-test-1755477780615427320) to be created 2025-08-18T00:43:04.226Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:43:04.226Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Initializing) 
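Each "update condition Type(Reason): message, phase: X" entry corresponds to a status condition being written on the ManagedClusterMigration resource together with a phase change. A generic way to maintain such conditions is apimachinery's meta.SetStatusCondition; the sketch below assumes a simplified status struct (MigrationStatus is hypothetical, only the condition type and reason strings mirror the log):

package migration

import (
    "k8s.io/apimachinery/pkg/api/meta"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MigrationStatus is a stand-in for the CR's status subresource (hypothetical shape).
type MigrationStatus struct {
    Phase      string
    Conditions []metav1.Condition
}

// setCondition records a condition such as ResourceInitialized(Waiting) and moves the phase.
// meta.SetStatusCondition only bumps LastTransitionTime when the condition status actually changes.
func setCondition(s *MigrationStatus, condType, reason, message string, status metav1.ConditionStatus, phase string) {
    meta.SetStatusCondition(&s.Conditions, metav1.Condition{
        Type:    condType,
        Status:  status,
        Reason:  reason,
        Message: message,
    })
    s.Phase = phase
}

// exampleUsage records the waiting condition seen in the log while staying in Initializing.
func exampleUsage() MigrationStatus {
    st := MigrationStatus{Phase: "Validating"}
    setCondition(&st, "ResourceInitialized", "Waiting",
        "waiting for token secret to be created", metav1.ConditionFalse, "Initializing")
    return st
}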
2025-08-18T00:43:04.226Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 2025-08-18T00:43:04.226Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:04.226Z INFO migration/migration_initializing.go:142 migration initializing finished 2025-08-18T00:43:04.226Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying 2025-08-18T00:43:04.230Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:43:04.230Z INFO migration/migration_deploying.go:50 migration deploying to source hub: hub1-test-1755477780615427320 2025-08-18T00:43:04.230Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(Waiting): waiting for resources to be prepared in the source hub hub1-test-1755477780615427320, phase: Deploying 2025-08-18T00:43:04.233Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:04.233Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Deploying) 2025-08-18T00:43:04.233Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 2025-08-18T00:43:04.233Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:43:04.233Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:04.233Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Deploying) 2025-08-18T00:43:04.233Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 2025-08-18T00:43:04.233Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:43:05.120Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477779477196626 2025-08-18T00:43:05.120Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Deploying) 2025-08-18T00:43:05.120Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 2025-08-18T00:43:05.120Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:43:05.120Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(Error): deploying source hub hub1-test-1755477780615427320 error: deploying failed, phase: Rollbacking 2025-08-18T00:43:05.123Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:05.123Z INFO migration/migration_rollbacking.go:59 sending rollback event to source hub: hub1-test-1755477780615427320 2025-08-18T00:43:05.123Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477780615427320 to complete Deploying stage rollback, phase: Rollbacking 2025-08-18T00:43:05.126Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:05.126Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Rollbacking) 2025-08-18T00:43:05.126Z INFO migration/migration_controller.go:139 processing migration instance: 
migration-test-1755477780615427320 2025-08-18T00:43:05.126Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:05.126Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:05.126Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477780615427320 (phase: Rollbacking) 2025-08-18T00:43:05.126Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 2025-08-18T00:43:05.126Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:05.161Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:05.161Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477780615427320 2025-08-18T00:43:05.164Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: b5e17558-4ee2-480e-aed2-4595f78f76fc 2025-08-18T00:43:05.164Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:05.164Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:43:05.164Z INFO migration/migration_controller.go:135 no desired managedclustermigration found •2025-08-18T00:43:05.382Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:05.382Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:43:05.386Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:43:05.388Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477785361677042 (phase: Validating) 2025-08-18T00:43:05.388Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:05.390Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 91161cb2-5c17-416d-aa22-4e96b1897959 2025-08-18T00:43:05.390Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:43:05.390Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:43:05.390Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:43:05.390Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:43:05.390Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477785361677042 2025-08-18T00:43:05.390Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:43:05.390Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:43:05.392Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:05.392Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477785361677042 (phase: Initializing) 2025-08-18T00:43:05.392Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:05.392Z INFO 
migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:05.394Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477785361677042/migration-test-1755477785361677042) to be created 2025-08-18T00:43:05.394Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477785361677042/migration-test-1755477785361677042) to be created, phase: Initializing 2025-08-18T00:43:05.396Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:05.396Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477785361677042 (phase: Initializing) 2025-08-18T00:43:05.396Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:05.396Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:05.396Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477785361677042/migration-test-1755477785361677042) to be created 2025-08-18T00:43:06.057Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:06.057Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477785361677042 (phase: Initializing) 2025-08-18T00:43:06.057Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:06.057Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:06.057Z INFO migration/migration_initializing.go:142 migration initializing finished 2025-08-18T00:43:06.057Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying 2025-08-18T00:43:06.060Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:43:06.060Z INFO migration/migration_deploying.go:50 migration deploying to source hub: hub1-test-1755477785361677042 2025-08-18T00:43:06.060Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(Waiting): waiting for resources to be prepared in the source hub hub1-test-1755477785361677042, phase: Deploying 2025-08-18T00:43:06.063Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:06.063Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477785361677042 (phase: Deploying) 2025-08-18T00:43:06.063Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:06.063Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:43:09.233Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:43:09.233Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477785361677042 (phase: Deploying) 2025-08-18T00:43:09.233Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:09.233Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:43:09.233Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(Error): 
deploying source hub hub1-test-1755477785361677042 error: deploying failed, phase: Rollbacking 2025-08-18T00:43:09.237Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:09.237Z INFO migration/migration_rollbacking.go:59 sending rollback event to source hub: hub1-test-1755477785361677042 2025-08-18T00:43:09.237Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477785361677042 to complete Deploying stage rollback, phase: Rollbacking 2025-08-18T00:43:09.250Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477785361677042 to complete Deploying stage rollback, phase: Rollbacking 2025-08-18T00:43:09.252Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:09.252Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477785361677042 (phase: Rollbacking) 2025-08-18T00:43:09.252Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:09.252Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:09.252Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477785361677042 to complete Deploying stage rollback, phase: Rollbacking 2025-08-18T00:43:09.265Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:09.265Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477785361677042 (phase: Rollbacking) 2025-08-18T00:43:09.265Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:09.265Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:10.126Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477779477196626 2025-08-18T00:43:10.126Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477785361677042 (phase: Rollbacking) 2025-08-18T00:43:10.126Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:10.126Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:10.127Z INFO migration/migration_rollbacking.go:187 managed service account cleanup will be handled by existing deletion logic for migration migration-test-1755477785361677042 2025-08-18T00:43:10.127Z INFO migration/migration_rollbacking.go:134 managed cluster annotation cleanup will be handled by source hub agents 2025-08-18T00:43:10.127Z INFO migration/migration_rollbacking.go:141 migration rollbacking finished - transitioning to Failed 2025-08-18T00:43:10.127Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(ResourceRolledBack): Deploying rollback completed successfully., phase: Failed 2025-08-18T00:43:10.130Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:10.130Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:43:10.130Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 
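Read together, these runs trace the controller's phase machine: Pending, Validating, Initializing, Deploying and Registering on the success path, with a stage error diverting to Rollbacking and ending in Failed once the rollback completes or times out; a validation failure goes straight to Failed, since nothing has been changed yet. A compact sketch of that transition logic, written as a plain function of the current phase and the stage outcome rather than the project's actual implementation:

package main

import "fmt"

type Phase string

const (
    PhasePending      Phase = "Pending"
    PhaseValidating   Phase = "Validating"
    PhaseInitializing Phase = "Initializing"
    PhaseDeploying    Phase = "Deploying"
    PhaseRegistering  Phase = "Registering"
    PhaseRollbacking  Phase = "Rollbacking"
    PhaseCompleted    Phase = "Completed" // assumed terminal success phase; the log only shows failure paths
    PhaseFailed       Phase = "Failed"
)

// next returns the phase to move to after the current stage finishes.
// An error during Initializing, Deploying or Registering sends the migration to
// Rollbacking; a validation error, a finished rollback, or a rollback timeout ends in Failed.
func next(current Phase, stageErr error) Phase {
    if stageErr != nil {
        if current == PhaseValidating || current == PhaseRollbacking {
            return PhaseFailed
        }
        return PhaseRollbacking
    }
    switch current {
    case PhasePending:
        return PhaseValidating
    case PhaseValidating:
        return PhaseInitializing
    case PhaseInitializing:
        return PhaseDeploying
    case PhaseDeploying:
        return PhaseRegistering
    case PhaseRegistering:
        return PhaseCompleted
    case PhaseRollbacking:
        return PhaseFailed
    default:
        return current
    }
}

func main() {
    // Replay the failure seen above: deploying fails, rollback completes, migration fails.
    fmt.Println(next(PhaseDeploying, fmt.Errorf("deploying failed"))) // Rollbacking
    fmt.Println(next(PhaseRollbacking, nil))                          // Failed
}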
•2025-08-18T00:43:10.312Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:10.312Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477785361677042 2025-08-18T00:43:10.315Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 91161cb2-5c17-416d-aa22-4e96b1897959 2025-08-18T00:43:10.315Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:10.315Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:43:10.315Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:43:10.393Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477785361677042 2025-08-18T00:43:10.393Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:43:10.393Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:43:10.728Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:10.728Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:43:10.730Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:43:10.743Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:43:10.745Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Validating) 2025-08-18T00:43:10.745Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:10.748Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 2c571cb3-1c58-4c53-a237-6e8b10e85001 2025-08-18T00:43:10.748Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:43:10.748Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:43:10.748Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:43:10.748Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:43:10.748Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477790312280069 2025-08-18T00:43:10.748Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:43:10.748Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:43:10.750Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:10.751Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Initializing) 2025-08-18T00:43:10.751Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:10.751Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:10.752Z INFO migration/migration_initializing.go:76 waiting for 
token secret (hub2-test-1755477790312280069/migration-test-1755477790312280069) to be created 2025-08-18T00:43:10.752Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477790312280069/migration-test-1755477790312280069) to be created, phase: Initializing 2025-08-18T00:43:10.754Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:10.754Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Initializing) 2025-08-18T00:43:10.754Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:10.754Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:10.754Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477790312280069/migration-test-1755477790312280069) to be created 2025-08-18T00:43:11.064Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:11.064Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Initializing) 2025-08-18T00:43:11.064Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:11.064Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:43:11.065Z INFO migration/migration_initializing.go:142 migration initializing finished 2025-08-18T00:43:11.065Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying 2025-08-18T00:43:11.068Z INFO migration/migration_deploying.go:33 migration deploying 2025-08-18T00:43:11.068Z INFO migration/migration_deploying.go:92 migration deploying finished 2025-08-18T00:43:11.068Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(ResourcesDeployed): Resources have been successfully deployed to the target hub cluster, phase: Registering 2025-08-18T00:43:11.070Z INFO migration/migration_registering.go:34 migration registering 2025-08-18T00:43:11.070Z INFO migration/migration_registering.go:49 migration registering: hub1-test-1755477790312280069 2025-08-18T00:43:11.070Z INFO migration/migration_pending.go:101 update condition ClusterRegistered(Waiting): waiting for managed clusters to migrating from source hub hub1-test-1755477790312280069, phase: Registering 2025-08-18T00:43:11.073Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:11.073Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Registering) 2025-08-18T00:43:11.073Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:11.073Z INFO migration/migration_registering.go:34 migration registering 2025-08-18T00:43:11.073Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:11.073Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Registering) 2025-08-18T00:43:11.073Z INFO migration/migration_controller.go:139 processing 
migration instance: migration-test-1755477790312280069 2025-08-18T00:43:11.073Z INFO migration/migration_registering.go:34 migration registering 2025-08-18T00:43:14.253Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:43:14.253Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Registering) 2025-08-18T00:43:14.253Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:14.253Z INFO migration/migration_registering.go:34 migration registering 2025-08-18T00:43:14.253Z INFO migration/migration_pending.go:101 update condition ClusterRegistered(Error): registering to hub hub1-test-1755477790312280069 error: registering failed, phase: Rollbacking 2025-08-18T00:43:14.256Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:14.256Z INFO migration/migration_rollbacking.go:59 sending rollback event to source hub: hub1-test-1755477790312280069 2025-08-18T00:43:14.257Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477790312280069 to complete Registering stage rollback, phase: Rollbacking 2025-08-18T00:43:14.259Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:14.259Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Rollbacking) 2025-08-18T00:43:14.259Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:14.259Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:14.259Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477790312280069 to complete Registering stage rollback, phase: Rollbacking 2025-08-18T00:43:14.272Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:14.272Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Rollbacking) 2025-08-18T00:43:14.272Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:14.272Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:15.751Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:15.751Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Rollbacking) 2025-08-18T00:43:15.751Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:15.751Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:15.751Z INFO migration/migration_rollbacking.go:98 sending rollback event to destination hub: hub2-test-1755477790312280069 2025-08-18T00:43:15.752Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for target hub hub2-test-1755477790312280069 to complete Registering stage rollback, phase: Rollbacking 2025-08-18T00:43:15.755Z INFO migration/migration_controller.go:126 reconcile managed cluster 
migration default/migration-test-1755477790312280069 2025-08-18T00:43:15.755Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Rollbacking) 2025-08-18T00:43:15.755Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:15.755Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:16.073Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:16.074Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Rollbacking) 2025-08-18T00:43:16.074Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:16.074Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:19.260Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865 2025-08-18T00:43:19.260Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477790312280069 (phase: Rollbacking) 2025-08-18T00:43:19.260Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:19.260Z INFO migration/migration_rollbacking.go:40 migration rollbacking started 2025-08-18T00:43:19.260Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Timeout): [Timeout] waiting for target hub hub2-test-1755477790312280069 to complete Registering stage rollback., phase: Failed 2025-08-18T00:43:19.264Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:19.264Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:43:19.264Z INFO migration/migration_controller.go:135 no desired managedclustermigration found •2025-08-18T00:43:19.465Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:19.465Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477790312280069 2025-08-18T00:43:19.468Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 2c571cb3-1c58-4c53-a237-6e8b10e85001 2025-08-18T00:43:19.468Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:19.468Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:43:19.468Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:43:20.755Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477790312280069 2025-08-18T00:43:20.755Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:43:20.755Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:43:21.075Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477780615427320 2025-08-18T00:43:21.075Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:43:21.075Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:43:24.264Z INFO 
migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477768786155865
2025-08-18T00:43:24.264Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:43:24.264Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
waiting for server to shut down...
2025-08-18 00:43:41.494 UTC [25186] LOG: received fast shutdown request
.
2025-08-18 00:43:41.494 UTC [25186] LOG: aborting any active transactions
2025-08-18 00:43:41.495 UTC [25186] LOG: background worker "logical replication launcher" (PID 25192) exited with exit code 1
2025-08-18 00:43:41.495 UTC [25187] LOG: shutting down
2025-08-18 00:43:41.495 UTC [25187] LOG: checkpoint starting: shutdown immediate
2025-08-18 00:43:41.509 UTC [25187] LOG: checkpoint complete: wrote 1021 buffers (6.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.011 s, sync=0.003 s, total=0.014 s; sync files=481, longest=0.001 s, average=0.001 s; distance=5360 kB, estimate=5360 kB; lsn=0/1A1A870, redo lsn=0/1A1A870
2025-08-18 00:43:41.515 UTC [25186] LOG: database system is shut down
done
server stopped

Summarizing 1 Failure:
[FAIL] Migration Phase Transitions - Simplified [It] should complete full successful migration lifecycle
/go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/migration/migration_phase_test.go:298

Ran 20 of 20 Specs in 90.918 seconds
FAIL! -- 19 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestController (90.92s)
2025-08-18T00:43:41.595Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
2025-08-18T00:43:41.595Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
FAIL
2025-08-18T00:43:41.595Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration"}
2025-08-18T00:43:41.595Z INFO controller/controller.go:239 All workers finished {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration"}
FAIL github.com/stolostron/multicluster-global-hub/test/integration/manager/migration 91.016s
failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
=== RUN TestSpecSyncer
Running Suite: Spec Syncer Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/spec
======================================================================================================================
Random Seed: 1755477730

Will run 16 of 16 specs

The files belonging to this database system will be owned by user "1002500000".
This user must also own the server process.

The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/tmp/embedded-postgres-go-58194/extracted/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    /tmp/tmp/embedded-postgres-go-58194/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-58194/extracted/data -l logfile start

waiting for server to start....
2025-08-18 00:42:25.384 UTC [25219] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
2025-08-18 00:42:25.384 UTC [25219] LOG: listening on IPv6 address "::1", port 58194
2025-08-18 00:42:25.384 UTC [25219] LOG: listening on IPv4 address "127.0.0.1", port 58194
2025-08-18 00:42:25.384 UTC [25219] LOG: listening on Unix socket "/tmp/.s.PGSQL.58194"
2025-08-18 00:42:25.386 UTC [25222] LOG: database system was shut down at 2025-08-18 00:42:25 UTC
2025-08-18 00:42:25.389 UTC [25219] LOG: database system is ready to accept connections
done
server started
2025-08-18T00:42:25.616Z INFO utils/utils.go:71 failed to read file ca-cert-path - open ca-cert-path: no such file or directory
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
script 1.upgrade.sql executed successfully.
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
2025-08-18T00:42:25.900Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver
2025-08-18T00:42:25.901Z INFO spec/dispatcher.go:51 started dispatching received bundles...
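The initdb and pg_ctl output above comes from an embedded PostgreSQL server that the suite starts on a local port (58194 here) before loading its schema scripts. Assuming the github.com/fergusstrange/embedded-postgres library, which produces this kind of runtime directory layout, a minimal sketch of starting and querying such an instance looks like this; the fixed port and the default postgres credentials are used only for illustration:

package main

import (
    "database/sql"
    "fmt"
    "log"

    embeddedpostgres "github.com/fergusstrange/embedded-postgres"
    _ "github.com/lib/pq" // PostgreSQL driver for database/sql
)

func main() {
    // Start a throwaway PostgreSQL server on a fixed local port (58194 mirrors the log;
    // a real test would normally pick a free port).
    pg := embeddedpostgres.NewDatabase(embeddedpostgres.DefaultConfig().Port(58194))
    if err := pg.Start(); err != nil {
        log.Fatalf("start embedded postgres: %v", err)
    }
    defer func() {
        if err := pg.Stop(); err != nil {
            log.Printf("stop embedded postgres: %v", err)
        }
    }()

    // DefaultConfig uses postgres/postgres/postgres for user, password and database.
    db, err := sql.Open("postgres",
        "host=localhost port=58194 user=postgres password=postgres dbname=postgres sslmode=disable")
    if err != nil {
        log.Fatalf("open connection: %v", err)
    }
    defer db.Close()

    var version string
    if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
        log.Fatalf("query: %v", err)
    }
    fmt.Println(version)
}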
2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-policy syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-placementrulebiding syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-application syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-managedclusterlabel syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-channels syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-placementrule syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-subscriptions syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-managedclusterset syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-placements syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO managed-cluster-labels-syncer syncers/managedcluster_labels_watcher.go:49 initialized watcherspecmanaged_clusters_labelsstatus tablemanaged_clusters 2025-08-18T00:42:25.901Z INFO db-to-transport-syncer-managedclustersetbinding syncers/generic_syncer.go:26 initialized syncer 2025-08-18T00:42:25.901Z INFO controller/controller.go:175 Starting EventSource {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule", "source": "kind source: *v1.PlacementRule"} 2025-08-18T00:42:25.901Z INFO controller/controller.go:183 Starting Controller {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:175 Starting EventSource {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding", "source": "kind source: *v1.PlacementBinding"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:183 Starting Controller {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:175 Starting EventSource {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription", "source": "kind source: *v1.Subscription"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:183 Starting Controller {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:175 Starting EventSource {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "source": "kind source: *v1.Policy"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:183 Starting Controller {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:175 Starting EventSource {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel", "source": "kind source: *v1.Channel"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:183 Starting Controller {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel"} 2025-08-18T00:42:25.902Z INFO 
controller/controller.go:175 Starting EventSource {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application", "source": "kind source: *v1beta1.Application"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:183 Starting Controller {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:175 Starting EventSource {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet", "source": "kind source: *v1beta2.ManagedClusterSet"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:183 Starting Controller {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:175 Starting EventSource {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding", "source": "kind source: *v1beta2.ManagedClusterSetBinding"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:183 Starting Controller {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:175 Starting EventSource {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement", "source": "kind source: *v1beta1.Placement"} 2025-08-18T00:42:25.902Z INFO controller/controller.go:183 Starting Controller {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"} checking postgres... 
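Each "Starting EventSource" / "Starting Controller" pair above (followed shortly by a "Starting workers" line) corresponds to one spec controller registered with the manager. A sketch of how such a controller is typically wired with controller-runtime's builder; the reconciler is a hypothetical stand-in, and the Policy import path is assumed from the governance-policy-propagator module rather than verified against this repository.

package specsyncer

import (
	"context"

	policiesv1 "open-cluster-management.io/governance-policy-propagator/api/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// policyReconciler is a placeholder for the spec-to-database syncer that the
// log shows adding finalizers and mirroring Policies into PostgreSQL.
type policyReconciler struct {
	client client.Client
}

func (r *policyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the Policy, upsert it into the spec tables, manage finalizers, and so on.
	return ctrl.Result{}, nil
}

// AddPolicyController registers the controller with the manager; when
// mgr.Start runs, controller-runtime emits the "Starting EventSource",
// "Starting Controller", and "Starting workers" lines seen above.
// The Policy types must already be added to the manager's scheme.
func AddPolicyController(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		Named("policy").
		For(&policiesv1.Policy{}).
		Complete(&policyReconciler{client: mgr.GetClient()})
}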
2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "PlacementBindings"} 2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Placements"} 2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "ManagedClustersLabels"} 2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "ManagedClusterSetBindings"} 2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "PlacementRules"} 2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Applications"} 2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Subscriptions"} 2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Channels"} 2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "ManagedClusterSets"} 2025-08-18T00:42:25.916Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Policies"} 2025-08-18T00:42:26.004Z INFO controller/controller.go:217 Starting workers {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule", "worker count": 1} 2025-08-18T00:42:26.004Z INFO controller/controller.go:217 Starting workers {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel", "worker count": 1} 2025-08-18T00:42:26.005Z INFO controller/controller.go:217 Starting workers {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription", "worker count": 1} 2025-08-18T00:42:26.005Z INFO controller/controller.go:217 Starting workers {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding", "worker count": 1} 2025-08-18T00:42:26.005Z INFO controller/controller.go:217 Starting workers {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "worker count": 1} 2025-08-18T00:42:26.006Z INFO controller/controller.go:217 Starting workers {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet", "worker count": 1} 2025-08-18T00:42:26.007Z INFO controller/controller.go:217 Starting workers {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application", "worker count": 1} 2025-08-18T00:42:26.010Z INFO controller/controller.go:217 Starting workers {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding", "worker count": 1} 2025-08-18T00:42:26.010Z INFO controller/controller.go:217 Starting workers {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement", "worker count": 1} agent spec sync the resource from manager: ManagedClusterSets agent spec sync the resource from manager: Policies agent spec sync the resource from manager: PlacementRules agent spec sync the resource from manager: ManagedClustersLabels agent spec sync the resource from manager: PlacementBindings agent spec sync the resource from manager: Channels agent spec sync the resource from manager: Applications agent spec sync the resource from manager: ManagedClusterSetBindings 
agent spec sync the resource from manager: Placements agent spec sync the resource from manager: Subscriptions •2025-08-18T00:42:26.943Z INFO channels-spec-controller controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "ch2"} spec.channels: default - test-channel-1 agent spec sync the resource from manager: ManagedClustersLabels agent spec sync the resource from manager: Channels spec.channels: default - test-channel-1 spec.channels: default - ch2 •2025-08-18T00:42:27.958Z INFO managedclustersetbindings-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-managedclustersetbinding-1"} ••2025-08-18T00:42:28.006Z INFO subscriptions-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "sub2"} spec.subscriptions: default - test-subscription-1 agent spec sync the resource from manager: ManagedClusterSetBindings agent spec sync the resource from manager: Subscriptions spec.subscriptions: default - test-subscription-1 spec.subscriptions: default - sub2 •2025-08-18T00:42:29.016Z INFO managedclustersets-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "", "Request.Name": "test-managedclusterset-1"} ••2025-08-18T00:42:29.023Z INFO policies-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-policy-1"} •agent spec sync the resource from manager: Policies agent spec sync the resource from manager: ManagedClusterSets •2025-08-18T00:42:30.040Z INFO policies-spec-syncer controllers/generic.go:89 Mismatch between hub and the database, updating the database {"Request.Namespace": "default", "Request.Name": "test-policy-1"} agent spec sync the resource from manager: Policies •2025-08-18T00:42:31.051Z INFO policies-spec-syncer controllers/generic.go:106 Removing an instance from the database {"Request.Namespace": "default", "Request.Name": "test-policy-1"} 2025-08-18T00:42:31.052Z INFO policies-spec-syncer controllers/generic.go:113 Removing finalizer {"Request.Namespace": "default", "Request.Name": "test-policy-1"} 2025-08-18T00:42:31.058Z INFO policies-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-policy-1"} 2025-08-18T00:42:31.060Z INFO controller/controller.go:314 Warning: Reconciler returned both a non-zero result and a non-nil error. The result will always be ignored if the error is non-nil and the non-nil error causes reqeueuing with exponential backoff. 
For more details, see: https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/reconcile#Reconciler {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "Policy": {"name":"test-policy-1","namespace":"default"}, "namespace": "default", "name": "test-policy-1", "reconcileID": "3d31b4ec-e4d4-4937-b808-520c7f387506"} 2025-08-18T00:42:31.060Z ERROR controller/controller.go:316 Reconciler error {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "Policy": {"name":"test-policy-1","namespace":"default"}, "namespace": "default", "name": "test-policy-1", "reconcileID": "3d31b4ec-e4d4-4937-b808-520c7f387506", "error": "failed to add finalzier: failed to add a finalizer: Operation cannot be fulfilled on policies.policy.open-cluster-management.io \"test-policy-1\": StorageError: invalid object, Code: 4, Key: /registry/policy.open-cluster-management.io/policies/default/test-policy-1, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 063d197b-c2f0-4c3f-a104-5d898f521c82, UID in object meta: "} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:31.909Z INFO db-to-transport-syncer-policy syncers/generic_syncer.go:76 sync interval has been reset to 2s agent spec sync the resource from manager: Policies 2025-08-18T00:42:31.910Z INFO managed-cluster-labels-syncer syncers/managedcluster_labels_watcher.go:93 trimming interval has been reset to 4s •2025-08-18T00:42:32.064Z INFO placementrules-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-placementrule-1"} ••2025-08-18T00:42:32.078Z INFO placements-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-placement-1"} 2025-08-18T00:42:32.078Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy" 2025-08-18T00:42:32.078Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy" 2025-08-18T00:42:32.078Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups" •2025-08-18T00:42:32.088Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy" 2025-08-18T00:42:32.088Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy" 2025-08-18T00:42:32.088Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups" agent spec sync the resource from manager: Placements agent spec sync the resource from manager: PlacementRules •2025-08-18T00:42:33.106Z INFO applications-spec-controller controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "app1"} spec.applications: default - test-application-1 2025-08-18T00:42:33.910Z INFO db-to-transport-syncer-policy syncers/generic_syncer.go:76 sync interval has been reset to 1s agent spec sync the resource from manager: Applications spec.applications: 
default - test-application-1 spec.applications: default - app1 •2025-08-18T00:42:34.108Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:42:34.108Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:42:34.108Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule"} 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-application syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-managedclusterlabel syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-channels syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-placementrulebiding syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-placementrule syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-placements syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-subscriptions syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-managedclustersetbinding syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-managedclusterset syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:42:34.108Z INFO db-to-transport-syncer-policy syncers/generic_syncer.go:35 stopped syncer 
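Stepping back to the reconciler warning logged a little earlier (controller/controller.go:314, a non-zero result returned together with a non-nil error, with the error winning) and to the "Precondition failed: UID in precondition" failure during the policies-spec-syncer finalizer churn: both appear to come out of the finalizer add/remove cycle racing a delete. A rough sketch of the usual way to keep that path clean with controllerutil's finalizer helpers, returning either a result or an error but never both; the finalizer string and helper name are illustrative, not the repository's.

package specsyncer

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	policiesv1 "open-cluster-management.io/governance-policy-propagator/api/v1"
)

const specSyncerFinalizer = "global-hub.open-cluster-management.io/resource-cleanup" // illustrative name

// ensureFinalizer adds the finalizer once and returns either a result or an
// error, never both, so the controller-runtime warning above is avoided.
func ensureFinalizer(ctx context.Context, c client.Client, policy *policiesv1.Policy) (ctrl.Result, error) {
	if controllerutil.ContainsFinalizer(policy, specSyncerFinalizer) {
		return ctrl.Result{}, nil
	}
	controllerutil.AddFinalizer(policy, specSyncerFinalizer)
	if err := c.Update(ctx, policy); err != nil {
		if apierrors.IsConflict(err) || apierrors.IsNotFound(err) {
			// The object changed or vanished under us, as in the
			// "Precondition failed: UID in precondition" message above:
			// requeue and let the next reconcile re-read it.
			return ctrl.Result{Requeue: true}, nil
		}
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}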
2025-08-18T00:42:34.108Z INFO managed-cluster-labels-syncer syncers/managedcluster_labels_watcher.go:52 stopped watcherspecmanaged_clusters_labelsstatus tablemanaged_clusters 2025-08-18T00:42:34.108Z INFO spec/dispatcher.go:56 stopped dispatching bundles 2025-08-18T00:42:34.108Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:42:34.108Z INFO controller/controller.go:239 All workers finished {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:239 All workers finished {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:239 All workers finished {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:239 All workers finished {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:239 All workers finished {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:239 All workers finished {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:239 All workers finished {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:239 All workers finished {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding"} 2025-08-18T00:42:34.108Z INFO controller/controller.go:239 All workers finished {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule"} 2025-08-18T00:42:34.108Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 00:42:34.109024 24377 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta2.ManagedClusterSetBinding" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:34.109101 24377 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta1.Placement" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:34.109192 24377 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta1.Application" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" 2025-08-18T00:42:34.109Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:42:34.109Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:42:34.109Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager 2025-08-18 00:42:34.110 UTC [25219] LOG: received fast shutdown 
request
2025-08-18 00:42:34.110 UTC [25219] LOG: aborting any active transactions
2025-08-18 00:42:34.110 UTC [25452] FATAL: terminating connection due to administrator command
2025-08-18 00:42:34.110 UTC [25237] FATAL: terminating connection due to administrator command
2025-08-18 00:42:34.111 UTC [25219] LOG: background worker "logical replication launcher" (PID 25225) exited with exit code 1
waiting for server to shut down....2025-08-18 00:42:34.116 UTC [25220] LOG: shutting down
2025-08-18 00:42:34.116 UTC [25220] LOG: checkpoint starting: shutdown immediate
2025-08-18 00:42:34.161 UTC [25220] LOG: checkpoint complete: wrote 1049 buffers (6.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.040 s, sync=0.006 s, total=0.046 s; sync files=481, longest=0.002 s, average=0.001 s; distance=5327 kB, estimate=5327 kB; lsn=0/1A127B0, redo lsn=0/1A127B0
2025-08-18 00:42:34.177 UTC [25219] LOG: database system is shut down
done
server stopped
Ran 16 of 16 Specs in 24.699 seconds
SUCCESS! -- 16 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestSpecSyncer (24.70s)
PASS
ok github.com/stolostron/multicluster-global-hub/test/integration/manager/spec 24.802s
failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-yctml9n0:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
=== RUN TestDbsyncer
Running Suite: Status dbsyncer Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/status
============================================================================================================================
Random Seed: 1755477734
Will run 32 of 32 specs
The files belonging to this database system will be owned by user "1002500000". This user must also own the server process.
The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".
Data page checksums are disabled.
creating directory /tmp/tmp/embedded-postgres-go-24878/extracted/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success.
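The three "forbidden" messages printed between the suites (here and after the migration suite earlier) are not spec failures; they appear when the test process probes the CI cluster for optional CRDs using the pod's default service account, which has no cluster-scoped get on customresourcedefinitions. A small sketch of checking that permission up front with a SelfSubjectAccessReview instead of letting the GET fail; the in-cluster clientset construction is the standard client-go pattern, not code taken from this repository.

package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("load in-cluster config: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}
	// Ask the API server whether the current service account may get CRDs
	// at the cluster scope, mirroring the RBAC failure reported above.
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Group:    "apiextensions.k8s.io",
				Resource: "customresourcedefinitions",
				Verb:     "get",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		log.Fatalf("submit access review: %v", err)
	}
	fmt.Printf("can get CRDs: %v (%s)\n", resp.Status.Allowed, resp.Status.Reason)
}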
You can now start the database server using: /tmp/tmp/embedded-postgres-go-24878/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-24878/extracted/data -l logfile start waiting for server to start....2025-08-18 00:42:26.122 UTC [25256] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit 2025-08-18 00:42:26.122 UTC [25256] LOG: listening on IPv6 address "::1", port 24878 2025-08-18 00:42:26.122 UTC [25256] LOG: listening on IPv4 address "127.0.0.1", port 24878 2025-08-18 00:42:26.122 UTC [25256] LOG: listening on Unix socket "/tmp/.s.PGSQL.24878" 2025-08-18 00:42:26.124 UTC [25259] LOG: database system was shut down at 2025-08-18 00:42:26 UTC 2025-08-18 00:42:26.126 UTC [25256] LOG: database system is ready to accept connections done server started script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. script 1.upgrade.sql executed successfully. script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. 2025-08-18T00:42:26.520Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:42:26.521Z INFO dispatcher/transport_dispatcher.go:42 transport dispatcher starts dispatching received events... 2025-08-18T00:42:26.521Z INFO dispatcher/conflation_dispatcher.go:64 starting dispatcher 2025-08-18T00:42:26.521Z INFO workerpool/worker_pool.go:36 connection stats {"open connection(worker)": 1, "max": 10} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 10} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 1} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 2} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 3} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 4} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 5} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 6} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 7} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 8} 2025-08-18T00:42:26.521Z INFO workerpool/worker.go:44 started worker {"WorkerID": 9} 2025-08-18T00:42:26.521Z INFO statistics/statistics.go:98 starting statistics Compliance: ID(b8b3e164-377e-4be1-a870-992265f31f7c) hub1/cluster1 unknown 2025-08-18T00:42:26.589Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:26.589Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:26.589Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:26.589Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.subscription.report"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": 
"io.open-cluster-management.operator.multiclusterglobalhubs.event.managedcluster"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompletecompliance"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.minicompliance"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placement.spec"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.completecompliance"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.heartbeat"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:48 registering hybrid element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.localrootpolicy"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementrule.spec"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.subscription.status"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedclustermigration"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.localreplicatedpolicy"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.security.alertcounts"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.deltacompliance"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.info"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:48 registering hybrid element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementrule.localspec"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance"} 2025-08-18T00:42:26.589Z INFO conflator/conflation_unit.go:39 registering complete element 
{"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementdecision"} 2025-08-18T00:42:26.589Z INFO hub1.complete.policy.compliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} Compliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster1 compliant Compliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster2 non_compliant Compliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster4 pending •2025-08-18T00:42:26.663Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:26.663Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:26.663Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:26.663Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:26.663Z INFO hub1.complete.policy.completecompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} Complete(Same): id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster1 compliant Complete(Same): id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster2 non_compliant Complete(Same): id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster4 compliant •2025-08-18T00:42:31.665Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.665Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.665Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:31.665Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster1 compliant Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster2 non_compliant Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster4 compliant Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster1 non_compliant Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster2 compliant Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster4 pending •S2025-08-18T00:42:31.770Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.770Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.770Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:31.770Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:31.770Z INFO hub1.complete.policy.minicompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} MinimalCompliance: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1 3 2 •2025-08-18T00:42:31.871Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.871Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.871Z INFO metadata/threshold_metadata.go:49 failed to get offset string from 
eventoffset 2025-08-18T00:42:31.871Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:31.871Z INFO conflator/element_hybrid.go:52 resetting stream element version {"type": "managedcluster", "version": "0.1"} 2025-08-18T00:42:31.878Z WARN managedcluster/managedcluster_handler.go:158 failed to get cluster info from db: no cluster info found for hub1 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster.(*managedClusterHandler).postToInventoryApi /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster/managedcluster_handler.go:158 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster.(*managedClusterHandler).handleEvent /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster/managedcluster_handler.go:135 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 ManagedCluster Creating hub1 3f406177-34b2-4852-88dd-ff2809680331 ManagedCluster Creating hub1 3f406177-34b2-4852-88dd-ff2809680332 ManagedCluster Creating hub1 3f406177-34b2-4852-88dd-ff2809680333 •2025-08-18T00:42:31.974Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.975Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.975Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:31.975Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax ManagedCluster Resync hub1 3f406177-34b2-4852-88dd-ff2809680332 ManagedCluster Resync hub1 3f406177-34b2-4852-88dd-ff2809680333 •2025-08-18T00:42:31.977Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.977Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:31.977Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:31.977Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax ManagedCluster Delete [{hub1 3f406177-34b2-4852-88dd-ff2809680332 {"spec": {"hubAcceptsClient": false}, "status": {"version": {}, "conditions": null, "clusterClaims": 
[{"name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680332"}]}, "metadata": {"uid": "3f406177-34b2-4852-88dd-ff2809680332", "name": "cluster2", "namespace": "cluster2", "creationTimestamp": null}} none 2025-08-18 00:42:31.872386 +0000 +0000 2025-08-18 00:42:31.872386 +0000 +0000 {0001-01-01 00:00:00 +0000 UTC false}} {hub1 3f406177-34b2-4852-88dd-ff2809680333 {"spec": {"hubAcceptsClient": false}, "status": {"version": {}, "conditions": null, "clusterClaims": [{"name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680333"}]}, "metadata": {"uid": "3f406177-34b2-4852-88dd-ff2809680333", "name": "cluster3", "namespace": "cluster3", "creationTimestamp": null}} none 2025-08-18 00:42:31.872386 +0000 +0000 2025-08-18 00:42:31.872386 +0000 +0000 {0001-01-01 00:00:00 +0000 UTC false}}] 2025-08-18T00:42:31.979Z WARN managedcluster/managedcluster_handler.go:158 failed to get cluster info from db: no cluster info found for hub1 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster.(*managedClusterHandler).postToInventoryApi /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster/managedcluster_handler.go:158 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster.(*managedClusterHandler).handleEvent /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster/managedcluster_handler.go:135 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 ManagedCluster Delete [{hub1 3f406177-34b2-4852-88dd-ff2809680333 {"spec": {"hubAcceptsClient": false}, "status": {"version": {}, "conditions": null, "clusterClaims": [{"name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680333"}]}, "metadata": {"uid": "3f406177-34b2-4852-88dd-ff2809680333", "name": "cluster3", "namespace": "cluster3", "creationTimestamp": null}} none 2025-08-18 00:42:31.872386 +0000 +0000 2025-08-18 00:42:31.872386 +0000 +0000 {0001-01-01 00:00:00 +0000 UTC false}}] •2025-08-18T00:42:32.081Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.081Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.081Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.081Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax ManagedCluster Delete [{hub1 
3f406177-34b2-4852-88dd-ff2809680333 {"spec": {"hubAcceptsClient": false}, "status": {"version": {}, "conditions": null, "clusterClaims": [{"name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680333"}]}, "metadata": {"uid": "3f406177-34b2-4852-88dd-ff2809680333", "name": "cluster3", "namespace": "cluster3", "creationTimestamp": null}} none 2025-08-18 00:42:31.872386 +0000 +0000 2025-08-18 00:42:31.872386 +0000 +0000 {0001-01-01 00:00:00 +0000 UTC false}}] 2025-08-18T00:42:32.084Z WARN managedcluster/managedcluster_handler.go:158 failed to get cluster info from db: no cluster info found for hub1 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster.(*managedClusterHandler).postToInventoryApi /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster/managedcluster_handler.go:158 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster.(*managedClusterHandler).handleEvent /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/managedcluster/managedcluster_handler.go:135 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 ManagedCluster Delete [] •2025-08-18T00:42:32.186Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.186Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.186Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.186Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.186Z INFO conflator/element_delta.go:50 resetting delta element version {"type": "event.localreplicatedpolicy", "version": "0.1"} LocalPolicyEvent: local-policy-namespace.policy-limitrange.17b0db242743213210 f302ce61-98e7-4d63-8dd2-65951e32fd95 non_compliant •2025-08-18T00:42:32.190Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.190Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.190Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.190Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.190Z INFO 
conflator/element_hybrid.go:52 resetting stream element version {"type": "policy.localspec", "version": "0.1"} 2025-08-18T00:42:32.195Z ERROR policy/local_policy_spec_handler.go:220 failed to get cluster info from db - no cluster info found for hub1 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).postPolicyToInventoryApi /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:220 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).handleEvent /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:145 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 PolicySpec Creating hub1 496dc811-365e-4c6c-8129-43e31d5dd5fe {"spec": {"disabled": false, "policy-templates": null}, "status": {}, "metadata": {"uid": "496dc811-365e-4c6c-8129-43e31d5dd5fe", "name": "testLocalPolicy1", "namespace": "default", "creationTimestamp": null}} PolicySpec Creating hub1 27f0913c-8bda-4a5f-92f1-0f133e8e8fdc {"spec": {"disabled": false, "policy-templates": null}, "status": {}, "metadata": {"uid": "27f0913c-8bda-4a5f-92f1-0f133e8e8fdc", "name": "testLocalPolicy2", "namespace": "default", "creationTimestamp": null}} •2025-08-18T00:42:32.293Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.293Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.293Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.293Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.307Z ERROR policy/local_policy_spec_handler.go:220 failed to get cluster info from db - no cluster info found for hub1 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).postPolicyToInventoryApi /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:220 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).handleEvent /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:145 
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 •2025-08-18T00:42:32.396Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.396Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.396Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.396Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.398Z ERROR policy/local_policy_spec_handler.go:220 failed to get cluster info from db - no cluster info found for hub1 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).postPolicyToInventoryApi /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:220 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).handleEvent /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:145 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 •2025-08-18T00:42:32.399Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.399Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.399Z INFO metadata/threshold_metadata.go:49 failed to get offset 
string from eventoffset 2025-08-18T00:42:32.399Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.401Z ERROR policy/local_policy_spec_handler.go:220 failed to get cluster info from db - no cluster info found for hub1 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).postPolicyToInventoryApi /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:220 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).handleEvent /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:145 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 •2025-08-18T00:42:32.402Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.402Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.402Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.402Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.402Z INFO hub1.complete.placementrule.localspec conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} hub1 f47ac10b-58cc-4372-a567-0e02b2c3d479 {"spec": {"schedulerName": "global-hub"}, "status": {}, "metadata": {"uid": "f47ac10b-58cc-4372-a567-0e02b2c3d479", "name": "test-placementrule-1", "namespace": "default", "creationTimestamp": null}} •2025-08-18T00:42:32.404Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.404Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.404Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.404Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.404Z INFO hub1.complete.placementrule.spec conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} PlacementRule: hub1 testPlacementRule •2025-08-18T00:42:32.406Z INFO 
metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.406Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.406Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.406Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.406Z INFO conflator/element_delta.go:50 resetting delta element version {"type": "event.localrootpolicy", "version": "0.1"} hub1 policy-limitrange.17b8363660d39188 Policy local-policy-namespace/policy-limitrange was propagated to cluster kind-hub2-cluster1/kind-hub2-cluster1 •2025-08-18T00:42:32.409Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.409Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.409Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.409Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.409Z INFO hub1.complete.subscription.report conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} SubscriptionReport: hub1 testAppReport •2025-08-18T00:42:32.411Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.411Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.411Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.411Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.411Z INFO hub1.complete.subscription.status conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} SubscriptionReport: hub1 testAppSbu •2025-08-18T00:42:32.413Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.413Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.413Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.413Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.413Z INFO hub1.complete.placementdecision conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} PlacementDecision: hub1 testPlacementDecision •2025-08-18T00:42:32.415Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.415Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.415Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.415Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 
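The repeated `strconv.ParseInt: parsing "": invalid syntax` entries above are what Go's strconv returns when the event carries an empty offset extension. A minimal, self-contained sketch reproducing that failure and guarding against it (the `parseOffset` helper is hypothetical, not the project's actual code):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseOffset is a hypothetical helper: ParseInt on an empty string
// always fails with the exact error string seen in the log.
func parseOffset(raw string) (int64, error) {
	if raw == "" {
		// Guard before parsing; otherwise the ParseInt error below is returned.
		return 0, fmt.Errorf("offset extension is empty")
	}
	return strconv.ParseInt(raw, 10, 64)
}

func main() {
	_, err := strconv.ParseInt("", 10, 64)
	fmt.Println(err) // strconv.ParseInt: parsing "": invalid syntax

	if _, err := parseOffset(""); err != nil {
		fmt.Println("guarded:", err)
	}
}
```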
2025-08-18T00:42:32.415Z INFO hub1.complete.placement.spec conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} Placement: hub1 testPlacements •2025-08-18T00:42:32.419Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.419Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:32.419Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:32.419Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:32.419Z INFO hub1.complete.policy.localcompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} 2025-08-18T00:42:32.419Z INFO policy.localcompliance policy/local_compliance_handler.go:61 handler start type io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcomplianceLH hub1version 0.1 LocalCompliance: ID(b8b3e164-377e-4be1-a870-992265f31f7c) hub1/cluster1 unknown LocalCompliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 compliant LocalCompliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 non_compliant LocalCompliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster4 pending LocalCompliance: expiredCount 1 LocalCompliance: addedCount 3 2025-08-18T00:42:32.430Z WARN policy.localcompliance policy/local_compliance_handler.go:224 failed to get cluster info from db - no cluster info found for hub1 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.syncInventory /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:224 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyComplianceHandler).handleCompliance /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:148 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyComplianceHandler).handleEventWrapper /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:55 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:121 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:119 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:76 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start 
/go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58
2025-08-18T00:42:32.430Z INFO policy.localcompliance policy/local_compliance_handler.go:183 handler finished {"type": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance", "LH": "hub1", "version": "0.1"}
LocalCompliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 compliant
LocalCompliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 non_compliant
LocalCompliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster4 pending
LocalCompliance: expiredCount 0
LocalCompliance: addedCount 3
•2025-08-18T00:42:32.532Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:42:32.532Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:42:32.532Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:42:32.532Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:42:32.532Z INFO policy.localcompliance policy/local_compliance_handler.go:61 handler start {"type": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance", "LH": "hub1", "version": "1.2"}
2025-08-18T00:42:32.536Z WARN policy.localcompliance policy/local_compliance_handler.go:224 failed to get cluster info from db - no cluster info found for hub1
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.syncInventory /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:224
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyComplianceHandler).handleCompliance /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:148
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyComplianceHandler).handleEventWrapper /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:55
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:121
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:119
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:76
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58
2025-08-18T00:42:32.536Z INFO policy.localcompliance policy/local_compliance_handler.go:183 handler finished {"type": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance", "LH": "hub1", "version": "1.2"}
LocalCompliance Resync: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 compliant
LocalCompliance Resync: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 non_compliant
LocalCompliance Resync: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster5 pending
•2025-08-18T00:42:37.533Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:42:37.533Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:42:37.533Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:42:37.533Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:42:37.533Z INFO hub1.complete.policy.localcompletecompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
2025-08-18T00:42:37.536Z WARN policy.localcompletecompliance policy/local_compliance_handler.go:224 failed to get cluster info from db - no cluster info found for hub1
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.syncInventory /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:224
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyCompleteHandler).handleCompleteCompliance /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_complete_handler.go:169
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyCompleteHandler).handleEventWrapper /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_complete_handler.go:56
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:121
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:119
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:76
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58
LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 compliant
LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 non_compliant
LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster5 compliant
•2025-08-18T00:42:42.535Z INFO 
metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.535Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.535Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:42.535Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:42.535Z INFO hub1.complete.policy.localcompletecompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 compliant LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 non_compliant LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster5 compliant 2025-08-18T00:42:42.540Z WARN policy.localcompletecompliance policy/local_compliance_handler.go:224 failed to get cluster info from db - no cluster info found for hub1 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.syncInventory /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:224 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyCompleteHandler).handleCompleteCompliance /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_complete_handler.go:169 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyCompleteHandler).handleEventWrapper /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_complete_handler.go:56 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:121 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:119 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:76 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 non_compliant LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster5 pending LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 compliant •2025-08-18T00:42:42.637Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.637Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents 
value: 2025-08-18T00:42:42.637Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:42.637Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:42.637Z INFO hub1.complete.managedhub.info conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} hub1 00000000-0000-0000-0000-000000000001 {"clusterId": "00000000-0000-0000-0000-000000000001", "consoleURL": "console-openshift-console.apps.test-cluster", "grafanaURL": "", "mchVersion": ""} •2025-08-18T00:42:42.641Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.641Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.641Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:42.641Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:42.641Z INFO hub1.complete.security.alertcounts conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} 2025/08/18 00:42:42 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/status/security_alert_counts_handler_test.go:63 record not found [1.640ms] [rows:0] SELECT * FROM "security"."alert_counts" ORDER BY "alert_counts"."hub_name" LIMIT 1 •2025-08-18T00:42:42.805Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.805Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.805Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:42.805Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:42.816Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.816Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.816Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:42.816Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025/08/18 00:42:42 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/status/security_alert_counts_handler_test.go:131 record not found [0.527ms] [rows:0] SELECT * FROM "security"."alert_counts" WHERE "alert_counts"."hub_name" = 'hub1' AND "alert_counts"."source" = 'other-namespace/other-name' ORDER BY "alert_counts"."hub_name" LIMIT 1 •2025-08-18T00:42:42.964Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.964Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.964Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:42.964Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid 
syntax •2025-08-18T00:42:42.974Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.974Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:42.974Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:42.974Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:42.974Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.localreplicatedpolicy"} 2025-08-18T00:42:42.974Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.security.alertcounts"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.deltacompliance"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.info"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:48 registering hybrid element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementrule.localspec"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementdecision"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.subscription.report"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.managedcluster"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompletecompliance"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.minicompliance"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placement.spec"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.completecompliance"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.heartbeat"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:48 registering hybrid element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster"} 
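The `record not found ... SELECT * FROM "security"."alert_counts" ... LIMIT 1` entries a little earlier are GORM's standard trace when `First` matches no row. A minimal sketch of that lookup pattern; the model fields and the DSN are assumptions based only on the query text, not the project's actual schema or config:

```go
package main

import (
	"errors"
	"fmt"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

// AlertCount loosely mirrors the security.alert_counts table referenced by the
// logged query; the exact column set is an assumption.
type AlertCount struct {
	HubName string
	Source  string
}

func (AlertCount) TableName() string { return "security.alert_counts" }

func main() {
	// Placeholder DSN; point it at whatever Postgres instance the test uses.
	db, err := gorm.Open(postgres.Open("host=localhost user=postgres dbname=hoh"), &gorm.Config{})
	if err != nil {
		panic(err)
	}

	var row AlertCount
	// First() on an empty result set returns gorm.ErrRecordNotFound, which the
	// GORM logger also prints as "record not found" alongside the generated SQL.
	err = db.Where(&AlertCount{HubName: "hub1", Source: "other-namespace/other-name"}).
		Order("hub_name").First(&row).Error
	if errors.Is(err, gorm.ErrRecordNotFound) {
		fmt.Println("no alert counts recorded yet")
	}
}
```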
2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.localrootpolicy"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementrule.spec"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.subscription.status"} 2025-08-18T00:42:42.975Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedclustermigration"} 2025-08-18T00:42:42.975Z INFO conflator/element_delta.go:50 resetting delta element version {"type": "event.managedcluster", "version": "0.1"} >> cluster-event-cluster1 13b2e003-2bdf-4c82-9bdf-f1aa7ccf608d managed-cluster1.17cd5c3642c43a8a 2025-08-18 00:42:42.974565 +0000 +0000 •>> cluster-event-cluster1 13b2e003-2bdf-4c82-9bdf-f1aa7ccf607c managed-cluster1.17cd5c3642c43a8a 2025-08-18 00:42:42.974565 +0000 +0000 •2025-08-18T00:42:43.082Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:43.082Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:42:43.082Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:42:43.082Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:42:43.082Z INFO hub1.complete.managedhub.heartbeat conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} hub1 2025-08-18 00:42:43.08269 +0000 +0000 active •2025-08-18T00:42:43.184Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:42:43.184Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:42:43.184Z INFO conflator/conflation_committer.go:55 context canceled, exiting committer... 
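The conflator worker stack traces repeated through this suite all pass through k8s.io/apimachinery's wait.PollUntilContextTimeout (worker.go wraps each handler call in a poll loop). A minimal sketch of that retry pattern, with a hypothetical condition function standing in for the handler call:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx := context.Background()
	attempts := 0

	// Poll every 100ms, give up after 1s; the 'true' runs the condition immediately.
	err := wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, time.Second, true,
		func(ctx context.Context) (bool, error) {
			attempts++
			// Hypothetical handler call: (false, nil) retries on the next tick,
			// (true, nil) stops successfully, a non-nil error aborts the loop.
			return attempts >= 3, nil
		})

	fmt.Println(attempts, err)
}
```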
2025-08-18T00:42:43.184Z INFO dispatcher/conflation_dispatcher.go:69 stopped dispatcher 2025-08-18T00:42:43.184Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:42:43.184Z INFO statistics/statistics.go:108 stopped statistics 2025-08-18T00:42:43.184Z INFO dispatcher/transport_dispatcher.go:47 stopped dispatching events 2025-08-18T00:42:43.184Z INFO manager/internal.go:550 Stopping and waiting for caches 2025-08-18T00:42:43.184Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:42:43.184Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:42:43.184Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager waiting for server to shut down....2025-08-18 00:42:43.185 UTC [25256] LOG: received fast shutdown request 2025-08-18 00:42:43.185 UTC [25256] LOG: aborting any active transactions 2025-08-18 00:42:43.186 UTC [25522] FATAL: terminating connection due to administrator command 2025-08-18 00:42:43.186 UTC [25521] FATAL: terminating connection due to administrator command 2025-08-18 00:42:43.186 UTC [25520] FATAL: terminating connection due to administrator command 2025-08-18 00:42:43.186 UTC [25519] FATAL: terminating connection due to administrator command 2025-08-18 00:42:43.186 UTC [25518] FATAL: terminating connection due to administrator command 2025-08-18 00:42:43.186 UTC [25264] FATAL: terminating connection due to administrator command 2025-08-18 00:42:43.189 UTC [25266] FATAL: terminating connection due to administrator command 2025-08-18 00:42:43.191 UTC [25256] LOG: background worker "logical replication launcher" (PID 25262) exited with exit code 1 2025-08-18 00:42:43.191 UTC [25474] FATAL: terminating connection due to administrator command 2025-08-18 00:42:43.193 UTC [25257] LOG: shutting down 2025-08-18 00:42:43.193 UTC [25257] LOG: checkpoint starting: shutdown immediate 2025-08-18 00:42:43.209 UTC [25257] LOG: checkpoint complete: wrote 1094 buffers (6.7%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.014 s, sync=0.003 s, total=0.017 s; sync files=493, longest=0.001 s, average=0.001 s; distance=5343 kB, estimate=5343 kB; lsn=0/1A16810, redo lsn=0/1A16810 2025-08-18 00:42:43.219 UTC [25256] LOG: database system is shut down done server stopped Ran 31 of 32 Specs in 30.108 seconds SUCCESS! 
-- 31 Passed | 0 Failed | 0 Pending | 1 Skipped --- PASS: TestDbsyncer (30.11s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/manager/status 30.165s === RUN TestControllers Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/webhook ==================================================================================================================================== Random Seed: 1755477734 Will run 4 of 4 specs 2025-08-18T00:42:24.432Z INFO controller-runtime.webhook webhook/server.go:183 Registering webhook {"path": "/mutating"} 2025-08-18T00:42:24.433Z INFO controller-runtime.webhook webhook/server.go:191 Starting webhook server 2025-08-18T00:42:24.433Z INFO controller-runtime.certwatcher certwatcher/certwatcher.go:161 Updated current TLS certificate 2025-08-18T00:42:24.433Z INFO controller-runtime.webhook webhook/server.go:242 Serving webhook server {"host": "127.0.0.1", "port": 43837} 2025-08-18T00:42:24.433Z INFO controller-runtime.certwatcher certwatcher/certwatcher.go:115 Starting certificate watcher 2025-08-18T00:42:26.600Z INFO webhook/admission_handler.go:34 admission webhook is called, name:, namespace:default, kind:Placement, operation:CREATE 2025-08-18T00:42:26.613Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy" 2025-08-18T00:42:26.636Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy" 2025-08-18T00:42:26.636Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups" •2025-08-18T00:42:26.643Z INFO webhook/admission_handler.go:34 admission webhook is called, name:, namespace:default, kind:Placement, operation:CREATE •2025-08-18T00:42:26.659Z INFO webhook/admission_handler.go:34 admission webhook is called, name:, namespace:default, kind:PlacementRule, operation:CREATE •2025-08-18T00:42:26.667Z INFO webhook/admission_handler.go:34 admission webhook is called, name:, namespace:default, kind:PlacementRule, operation:CREATE •2025-08-18T00:42:26.671Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:42:26.672Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:42:26.672Z INFO manager/internal.go:550 Stopping and waiting for caches 2025-08-18T00:42:26.672Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:42:26.672Z INFO controller-runtime.webhook webhook/server.go:249 Shutting down webhook server with timeout of 1 minute 2025-08-18T00:42:26.676Z ERROR controller-runtime.certwatcher certwatcher/certwatcher.go:185 error re-watching file {"error": "no such file or directory"} sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).handleEvent /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/certwatcher/certwatcher.go:185 sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).Watch /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/certwatcher/certwatcher.go:133 2025-08-18T00:42:26.676Z ERROR controller-runtime.certwatcher certwatcher/certwatcher.go:190 error re-reading certificate {"error": "open /tmp/envtest-serving-certs-3822624094/tls.crt: no such file or directory"} sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).handleEvent /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/certwatcher/certwatcher.go:190 sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).Watch 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/certwatcher/certwatcher.go:133 2025-08-18T00:42:27.737Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:42:27.745Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager Ran 4 of 4 Specs in 13.690 seconds SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestControllers (13.69s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/manager/webhook 13.727s === RUN TestControllers Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator ============================================================================================================================= Random Seed: 1755477734 Will run 2 of 2 specs The files belonging to this database system will be owned by user "1002500000". This user must also own the server process. The database cluster will be initialized with locale "C". The default database encoding has accordingly been set to "SQL_ASCII". The default text search configuration will be set to "english". Data page checksums are disabled. creating directory /tmp/tmp/embedded-postgres-go-62776/extracted/data ... ok creating subdirectories ... ok selecting dynamic shared memory implementation ... posix selecting default max_connections ... 100 selecting default shared_buffers ... 128MB selecting default time zone ... UTC creating configuration files ... ok running bootstrap script ... ok performing post-bootstrap initialization ... ok syncing data to disk ... ok Success. You can now start the database server using: /tmp/tmp/embedded-postgres-go-62776/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-62776/extracted/data -l logfile start waiting for server to start....2025-08-18 00:42:28.061 UTC [25400] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit 2025-08-18 00:42:28.062 UTC [25400] LOG: listening on IPv6 address "::1", port 62776 2025-08-18 00:42:28.062 UTC [25400] LOG: listening on IPv4 address "127.0.0.1", port 62776 2025-08-18 00:42:28.062 UTC [25400] LOG: listening on Unix socket "/tmp/.s.PGSQL.62776" 2025-08-18 00:42:28.064 UTC [25409] LOG: database system was shut down at 2025-08-18 00:42:27 UTC 2025-08-18 00:42:28.091 UTC [25400] LOG: database system is ready to accept connections done server started I0818 00:42:28.198108 24771 leaderelection.go:257] attempting to acquire leader lease default/549a8919.open-cluster-management.io... 
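The initdb/pg_ctl output above (note the /tmp/tmp/embedded-postgres-go-62776 runtime path) is produced by an embedded Postgres test fixture. A minimal sketch of starting and stopping such a throwaway instance with github.com/fergusstrange/embedded-postgres; the credentials, database name, and port are illustrative only:

```go
package main

import (
	"log"

	embeddedpostgres "github.com/fergusstrange/embedded-postgres"
)

func main() {
	// Arbitrary settings; the suite in this log binds a random high port such as 62776.
	db := embeddedpostgres.NewDatabase(
		embeddedpostgres.DefaultConfig().
			Username("postgres").
			Password("postgres").
			Database("hoh").
			Port(62776),
	)

	if err := db.Start(); err != nil {
		log.Fatalf("start embedded postgres: %v", err)
	}
	defer func() {
		// Stop() is what triggers the "received fast shutdown request" lines seen here.
		if err := db.Stop(); err != nil {
			log.Printf("stop embedded postgres: %v", err)
		}
	}()

	// ... connect with pgx, lib/pq, or GORM and run assertions against the instance ...
}
```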
I0818 00:42:28.204532 24771 leaderelection.go:271] successfully acquired lease default/549a8919.open-cluster-management.io 2025-08-18T00:42:28.204Z INFO controller/controller.go:175 Starting EventSource {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:28.204Z INFO controller/controller.go:175 Starting EventSource {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.MulticlusterGlobalHubAgent"} 2025-08-18T00:42:28.204Z INFO controller/controller.go:183 Starting Controller {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.315Z INFO controller/controller.go:217 Starting workers {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:28.430Z INFO storage/storage_reconciler.go:101 start storage controller 2025-08-18T00:42:28.430Z INFO transporter/transport_reconciler.go:57 start transport controller 2025-08-18T00:42:28.430Z INFO controller/controller.go:183 Starting Controller {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:217 Starting workers {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:28.430Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.StatefulSet"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.PrometheusRule"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: 
*v1.ServiceMonitor"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:183 Starting Controller {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:132 Starting EventSource {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:132 Starting EventSource {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:28.430Z INFO transporter/transport_reconciler.go:65 inited transport controller 2025-08-18T00:42:28.430Z INFO managedhub/managedhub_controller.go:64 start managedhub controller 2025-08-18T00:42:28.430Z INFO managedhub/managedhub_controller.go:72 inited managedhub controller 2025-08-18T00:42:28.430Z INFO acm/resources.go:96 start acm controller 2025-08-18T00:42:28.430Z INFO controller/controller.go:183 Starting Controller {"controller": "acm-controller"} 2025-08-18T00:42:28.430Z INFO controller/controller.go:217 Starting workers {"controller": "acm-controller", "worker count": 1} 2025-08-18T00:42:28.431Z INFO controller/controller.go:175 Starting EventSource {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:175 Starting EventSource {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ManagedCluster"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:183 Starting Controller {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:132 Starting EventSource {"controller": "acm-controller", "source": "kind source: *v1.PartialObjectMetadata"} 2025-08-18T00:42:28.431Z INFO acm/resources.go:122 inited acm controller 2025-08-18T00:42:28.431Z INFO manager/manager_reconciler.go:100 start manager controller 2025-08-18T00:42:28.431Z INFO addon/default_agent_controller.go:71 start default agent controller 2025-08-18T00:42:28.431Z INFO storage/postgres_user_reconciler.go:59 start postgres users controller 2025-08-18T00:42:28.431Z INFO addon/addon_manager.go:66 start addon manager controller 2025-08-18T00:42:28.431Z INFO webhook/webhook_controller.go:63 start webhook controller 2025-08-18T00:42:28.431Z INFO controller/controller.go:183 Starting Controller {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:217 Starting workers {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:28.431Z INFO controller/controller.go:175 Starting EventSource {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:175 Starting 
EventSource {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:183 Starting Controller {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:132 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:132 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.AddOnDeploymentConfig"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:132 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta2.ManagedClusterSetBinding"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:132 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.MutatingWebhookConfiguration"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:132 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta1.Placement"} 2025-08-18T00:42:28.431Z INFO webhook/webhook_controller.go:73 inited webhook controller 2025-08-18T00:42:28.431Z INFO mceaddons/mce_addons_controller.go:60 start mce addons controller 2025-08-18T00:42:28.431Z INFO agent/local_agent_controller.go:48 start local agent controller 2025-08-18T00:42:28.431Z INFO backup/backup_start.go:78 start backup controller 2025-08-18T00:42:28.431Z INFO backup/backup_start.go:90 inited backup controller 2025-08-18T00:42:28.431Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.PersistentVolumeClaim"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.MultiClusterHub"} 2025-08-18T00:42:28.431Z INFO controller/controller.go:183 Starting Controller 
{"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.461Z INFO agent/local_agent_controller.go:48 start local agent controller 2025-08-18T00:42:28.461Z INFO manager/manager_reconciler.go:100 start manager controller 2025-08-18T00:42:28.461Z INFO addon/default_agent_controller.go:71 start default agent controller 2025-08-18T00:42:28.461Z INFO addon/addon_manager.go:66 start addon manager controller 2025-08-18T00:42:28.461Z INFO mceaddons/mce_addons_controller.go:60 start mce addons controller •2025-08-18T00:42:28.489Z INFO KubeAPIWarningLogger log/warning_handler.go:65 metadata.finalizers: "fz": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers 2025-08-18T00:42:28.528Z INFO transporter/transport_reconciler.go:49 TransportController resource removed: true 2025-08-18T00:42:28.528Z INFO managedhub/managedhub_controller.go:53 managedHubController resource removed: true 2025-08-18T00:42:28.528Z INFO webhook/webhook_controller.go:78 webhookController resource removed: false 2025-08-18T00:42:28.528Z INFO transporter/transport_reconciler.go:49 TransportController resource removed: true 2025-08-18T00:42:28.528Z INFO managedhub/managedhub_controller.go:53 managedHubController resource removed: true 2025-08-18T00:42:28.528Z INFO webhook/webhook_controller.go:78 webhookController resource removed: false 2025-08-18T00:42:28.572Z INFO controller/controller.go:217 Starting workers {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "worker count": 1} 2025-08-18T00:42:28.573Z INFO transporter/transport_reconciler.go:49 TransportController resource removed: true 2025-08-18T00:42:28.573Z INFO managedhub/managedhub_controller.go:53 managedHubController resource removed: true 2025-08-18T00:42:28.573Z INFO webhook/webhook_controller.go:78 webhookController resource removed: false 2025-08-18T00:42:28.573Z INFO webhook/webhook_controller.go:78 webhookController resource removed: false 2025-08-18T00:42:28.633Z INFO protocol/strimzi_kafka_controller.go:58 KafkaController resource removed: false 2025-08-18T00:42:28.633Z INFO transporter/transport_reconciler.go:136 Wait kafka resource removed 2025-08-18T00:42:28.646Z INFO controller/controller.go:217 Starting workers {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:28.660Z INFO controller/controller.go:217 Starting workers {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:28.660Z INFO controller/controller.go:217 Starting workers {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:28.671Z ERROR controller/controller.go:316 Reconciler error {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbxlfp"}, "namespace": "namespace-pbxlfp", "name": "test-mgh", "reconcileID": "ef143039-85c1-406d-a49b-f52a1218b54e", "error": "MulticlusterGlobalHub.operator.open-cluster-management.io \"test-mgh\" not found"} 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:28.779Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:42:28.779Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:42:28.779Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "acm-controller"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:239 All workers finished {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:239 All workers finished {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:239 All workers finished {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:239 All workers finished {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO 
controller/controller.go:239 All workers finished {"controller": "acm-controller"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:239 All workers finished {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:239 All workers finished {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO controller/controller.go:239 All workers finished {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:28.779Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 00:42:28.779938 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780029 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.Subscription" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780083 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta2.ManagedClusterSetBinding" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780161 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.AddOnDeploymentConfig" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780228 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManagedCluster" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780267 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.MutatingWebhookConfiguration" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780327 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.PrometheusRule" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780373 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta1.Placement" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780413 24771 reflector.go:556] "Warning: watch ended with error" 
reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.MultiClusterHub" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780462 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ServiceMonitor" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780515 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.PartialObjectMetadata" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780565 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ServiceAccount" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780606 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.StatefulSet" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780710 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.PersistentVolumeClaim" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780774 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:28.780840 24771 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" 2025-08-18T00:42:28.780Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:42:28.780Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:42:28.780Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager 2025-08-18T00:42:28.781Z ERROR manager/internal.go:512 error received after stop sequence was engaged {"error": "leader election lost"} sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/manager/internal.go:512 2025-08-18 00:42:28.782 UTC [25400] LOG: received fast shutdown request 2025-08-18 00:42:28.782 UTC [25400] LOG: aborting any active transactions 2025-08-18 00:42:28.782 UTC [25422] FATAL: terminating connection due to administrator command 2025-08-18 00:42:28.784 UTC [25400] LOG: background worker "logical replication launcher" (PID 25413) exited with exit code 1 waiting for server to shut down....2025-08-18 00:42:28.785 UTC [25407] LOG: shutting down 
2025-08-18 00:42:28.785 UTC [25407] LOG: checkpoint starting: shutdown immediate 2025-08-18 00:42:28.802 UTC [25407] LOG: checkpoint complete: wrote 919 buffers (5.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.017 s, sync=0.001 s, total=0.018 s; sync files=301, longest=0.001 s, average=0.001 s; distance=4231 kB, estimate=4231 kB; lsn=0/1900648, redo lsn=0/1900648 2025-08-18 00:42:28.846 UTC [25400] LOG: database system is shut down done server stopped Ran 2 of 2 Specs in 15.814 seconds SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestControllers (15.81s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/operator 15.883s === RUN TestControllers Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers ========================================================================================================================================= Random Seed: 1755477734 Will run 15 of 15 specs The files belonging to this database system will be owned by user "1002500000". This user must also own the server process. The database cluster will be initialized with locale "C". The default database encoding has accordingly been set to "SQL_ASCII". The default text search configuration will be set to "english". Data page checksums are disabled. creating directory /tmp/tmp/embedded-postgres-go-19428/extracted/data ... ok creating subdirectories ... ok selecting dynamic shared memory implementation ... posix selecting default max_connections ... 100 selecting default shared_buffers ... 128MB selecting default time zone ... UTC creating configuration files ... ok running bootstrap script ... ok performing post-bootstrap initialization ... ok syncing data to disk ... ok Success. You can now start the database server using: /tmp/tmp/embedded-postgres-go-19428/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-19428/extracted/data -l logfile start waiting for server to start....2025-08-18 00:42:27.289 UTC [25309] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit 2025-08-18 00:42:27.289 UTC [25309] LOG: listening on IPv6 address "::1", port 19428 2025-08-18 00:42:27.289 UTC [25309] LOG: listening on IPv4 address "127.0.0.1", port 19428 2025-08-18 00:42:27.290 UTC [25309] LOG: listening on Unix socket "/tmp/.s.PGSQL.19428" 2025-08-18 00:42:27.294 UTC [25312] LOG: database system was shut down at 2025-08-18 00:42:27 UTC 2025-08-18 00:42:27.297 UTC [25309] LOG: database system is ready to accept connections done server started I0818 00:42:27.830857 24772 leaderelection.go:257] attempting to acquire leader lease default/549a8919.open-cluster-management.io... 
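The initdb and pg_ctl output above (runtime path /tmp/tmp/embedded-postgres-go-19428, port 19428) indicates that each integration suite boots its own throwaway PostgreSQL instance before the operator attempts to acquire the leader lease. The sketch below shows the typical start/stop pattern with the embedded-postgres Go library that the directory naming suggests; the port is taken from the log, the credentials are that library's documented defaults, and none of this is the suite's actual setup code.

package main

import (
	"database/sql"
	"log"

	embeddedpostgres "github.com/fergusstrange/embedded-postgres"
	_ "github.com/lib/pq" // PostgreSQL driver for database/sql
)

func main() {
	// Illustrative only: start an embedded PostgreSQL on the port seen in the
	// log (19428). The library runs initdb and pg_ctl under the hood, which is
	// what produces the "creating directory ... waiting for server to start"
	// lines above.
	pg := embeddedpostgres.NewDatabase(
		embeddedpostgres.DefaultConfig().Port(19428),
	)
	if err := pg.Start(); err != nil {
		log.Fatal(err)
	}

	if err := ping(); err != nil {
		log.Println("connecting to embedded postgres:", err)
	}

	// Stopping the server on teardown yields a "fast shutdown request"
	// sequence like the one logged at the end of the previous suite.
	if err := pg.Stop(); err != nil {
		log.Println("stopping embedded postgres:", err)
	}
}

// ping opens a connection using the embedded-postgres defaults
// (user/password/database all "postgres") and verifies the server is up.
func ping() error {
	conn, err := sql.Open("postgres",
		"host=localhost port=19428 user=postgres password=postgres dbname=postgres sslmode=disable")
	if err != nil {
		return err
	}
	defer conn.Close()
	return conn.Ping()
}
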
I0818 00:42:27.845395 24772 leaderelection.go:271] successfully acquired lease default/549a8919.open-cluster-management.io 2025-08-18T00:42:27.873Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:27.874Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:27.874Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:42:27.879Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Deployment"} 2025-08-18T00:42:27.879Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Service"} 2025-08-18T00:42:27.879Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:42:27.879Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ClusterRole"} 2025-08-18T00:42:27.879Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ClusterRoleBinding"} 2025-08-18T00:42:27.879Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Route"} 2025-08-18T00:42:27.879Z INFO controller/controller.go:183 Starting Controller {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:27.995Z INFO controller/controller.go:217 Starting workers {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:28.397Z INFO utils/utils.go:163 creating configmap, namespace: namespace-z95mlk, name: multicluster-global-hub-alerting 2025-08-18T00:42:28.401Z INFO utils/utils.go:193 creating secret, namespace: namespace-z95mlk, name: multicluster-global-hub-grafana-config 2025-08-18T00:42:28.406Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 
/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:367 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:28.855Z INFO utils/utils.go:163 creating configmap, namespace: namespace-r66cfd, name: multicluster-global-hub-alerting 2025-08-18T00:42:28.859Z INFO utils/utils.go:193 creating secret, namespace: namespace-r66cfd, name: multicluster-global-hub-grafana-config null •{ "components": { "multicluster-global-hub-grafana": { "name": "multicluster-global-hub-grafana", "kind": "Deployment", "type": "Available", "status": "False", "lastTransitionTime": "2025-08-18T00:42:28Z", "reason": "MinimumReplicasUnavailable", "message": "Component multicluster-global-hub-grafana has been deployed but is not ready" } }, "phase": "" } •2025-08-18T00:42:29.582Z INFO inventory/spicedb_reconciler.go:69 start spiceDB controller 2025-08-18T00:42:29.582Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:29.582Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Deployment"} 2025-08-18T00:42:29.582Z INFO controller/controller.go:183 Starting Controller {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:29.582Z INFO controller/controller.go:217 Starting workers {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:29.710Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:29.710Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:29.710Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: 
*v1alpha1.SpiceDBCluster"} 2025-08-18T00:42:29.710Z INFO controller/controller.go:183 Starting Controller {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:29.820Z INFO controller/controller.go:217 Starting workers {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:29.865Z INFO inventory/spicedb_controller.go:341 spicedb cluster is created spicedb 2025-08-18T00:42:29.865Z ERROR controller/controller.go:316 Reconciler error {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-s8t6l7"}, "namespace": "namespace-s8t6l7", "name": "test-mgh", "reconcileID": "c0a105fb-9398-4bb7-83c9-1fd7942671f2", "error": "failed to create spicedb cluster: resource name may not be empty"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:29.952Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:337 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:29.952Z ERROR controller/controller.go:316 Reconciler error {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-s8t6l7"}, "namespace": "namespace-s8t6l7", "name": "test-mgh", "reconcileID": "e069bb91-6b5d-4d2b-97c0-7b5dfaf8b3a3", "error": "failed to create/update grafana objects: configmaps \"grafana-dashboards\" is forbidden: unable to create new content in namespace namespace-s8t6l7 because it is being 
terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:29.959Z ERROR inventory/spicedb_controller.go:175 failed to manipulate spicedb realtions api objects: deployments.apps "relations-api" is forbidden: unable to create new content in namespace namespace-s8t6l7 because it is being terminated github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*spiceDBClusterReconciler).reconcileRelationsAPI /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/spicedb_controller.go:175 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*spiceDBClusterReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/spicedb_controller.go:126 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:29.959Z ERROR controller/controller.go:316 Reconciler error {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-s8t6l7"}, "namespace": "namespace-s8t6l7", "name": "test-mgh", "reconcileID": "5715c076-fd6d-4462-89fc-f16ab6687f9b", "error": "failed to reconcile relations api: deployments.apps \"relations-api\" is forbidden: unable to create new content in namespace namespace-s8t6l7 because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:29.962Z INFO controller/controller.go:175 Starting EventSource {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn", "source": "kind source: *v1alpha1.ClusterManagementAddOn"} 2025-08-18T00:42:29.962Z INFO controller/controller.go:175 Starting EventSource {"controller": 
"AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:29.962Z INFO controller/controller.go:183 Starting Controller {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn"} 2025-08-18T00:42:29.983Z INFO mceaddons/mce_addons_controller.go:60 start mce addons controller 2025-08-18T00:42:30.065Z INFO controller/controller.go:217 Starting workers {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn", "worker count": 1} 2025-08-18T00:42:30.066Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:42:30.066Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /cluster-proxy not found, skip reconcile 2025-08-18T00:42:30.066Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /managed-serviceaccount not found, skip reconcile 2025-08-18T00:42:30.416Z INFO utils/utils.go:163 creating configmap, namespace: test-mgh, name: multicluster-global-hub-alerting 2025-08-18T00:42:30.420Z INFO utils/utils.go:193 creating secret, namespace: test-mgh, name: multicluster-global-hub-grafana-config 2025-08-18T00:42:30.428Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:367 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:30.992Z INFO mceaddons/mce_addons_controller.go:168 Update ClusterManagementAddOn /work-manager 2025-08-18T00:42:30.999Z INFO mceaddons/mce_addons_controller.go:168 Update ClusterManagementAddOn /cluster-proxy 2025-08-18T00:42:31.010Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn mc-hosted/managed-serviceaccount not found, skip reconcile 2025/08/18 00:42:31 [ERROR] Failed to list ClusterManagementAddOn 2025-08-18T00:42:31.010Z INFO mceaddons/mce_addons_controller.go:168 Update ClusterManagementAddOn /managed-serviceaccount •2025-08-18T00:42:31.149Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:42:31.150Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn 
/cluster-proxy not found, skip reconcile 2025-08-18T00:42:31.150Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /managed-serviceaccount not found, skip reconcile 2025-08-18T00:42:31.164Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:31.164Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Deployment"} 2025-08-18T00:42:31.164Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:31.164Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Service"} 2025-08-18T00:42:31.164Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:42:31.164Z INFO controller/controller.go:183 Starting Controller {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:31.164Z INFO controller/controller.go:217 Starting workers {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:31.439Z INFO utils/utils.go:163 creating configmap, namespace: namespace-gt8wf8, name: multicluster-global-hub-alerting 2025-08-18T00:42:31.442Z INFO utils/utils.go:193 creating secret, namespace: namespace-gt8wf8, name: multicluster-global-hub-grafana-config 2025-08-18T00:42:31.448Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:367 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.456Z 
INFO config/transport_config.go:233 set the inventory clientCA - key: inventory-api-client-ca-certs 2025-08-18T00:42:32.456Z INFO config/transport_config.go:237 set the inventory clientCA - cert: inventory-api-client-ca-certs 2025-08-18T00:42:32.493Z ERROR inventory/inventory_reconciler.go:152 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:152 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:278 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:32.577Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:42:32.577Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /cluster-proxy not found, skip reconcile 2025-08-18T00:42:32.577Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /managed-serviceaccount not found, skip reconcile 2025-08-18T00:42:32.597Z INFO manager/manager_reconciler.go:100 start manager controller 2025-08-18T00:42:32.597Z INFO manager/manager_reconciler.go:128 inited manager controller •2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Deployment"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Service"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ClusterRole"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource 
{"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ClusterRoleBinding"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Role"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.RoleBinding"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Route"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.ClusterManagementAddOn"} 2025-08-18T00:42:32.597Z INFO controller/controller.go:183 Starting Controller {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:32.705Z INFO controller/controller.go:217 Starting workers {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:32.705Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.705Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": 
{"name":"test-mgh","namespace":"namespace-q59m4p"}, "namespace": "namespace-q59m4p", "name": "test-mgh", "reconcileID": "d1df6dcc-edcd-4578-b3fb-348d529a0f4f", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 1222 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x37268c8, 0xc00238a120}, {0x2b54860, 0x533b0f0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x2b54860?, 0x533b0f0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod({0x37268c8, 0xc00238a120}, {0x0, 0x0}, {0xc0013b1940, 0x10}, {0x318bd30?, 0x1?})\n\t/go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 +0xbd\ngithub.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile(0xc002931440, {0x37268c8, 0xc00238a120}, {{{0x0?, 0x312b712?}, {0x5?, 0x100?}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 +0x10aa\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000d1b6b0?, {0x37268c8?, 0xc00238a120?}, {{{0xc0013b1940?, 0x0?}, {0xc0013b1910?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x3747160, {0x3726900, 0xc0009fcdc0}, {{{0xc0013b1940, 0x10}, {0xc0013b1910, 0x8}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x3747160, {0x3726900, 0xc0009fcdc0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 1141\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.705Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-q59m4p"}, "namespace": "namespace-q59m4p", "name": "test-mgh", "reconcileID": "d1df6dcc-edcd-4578-b3fb-348d529a0f4f", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.989Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:344 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:32.989Z ERROR controller/controller.go:316 Reconciler error {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-q59m4p"}, "namespace": "namespace-q59m4p", "name": "test-mgh", "reconcileID": "f39a921c-d37c-418e-becd-8338ec888c20", "error": "failed to generate grafana datasource secret: failed to get password from database_uri: 
test-url"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.075Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:33.238Z INFO KubeAPIWarningLogger log/warning_handler.go:65 metadata.finalizers: "fz": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers 2025-08-18T00:42:33.262Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:42:33.262Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /cluster-proxy not found, skip reconcile 2025-08-18T00:42:33.263Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /managed-serviceaccount not found, skip reconcile 2025-08-18T00:42:33.322Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.325Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:344 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.325Z ERROR controller/controller.go:316 Reconciler error {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-q59m4p"}, "namespace": "namespace-q59m4p", "name": "test-mgh", "reconcileID": "1311575f-7ba5-4ec4-8cd8-81c4f8dd115d", "error": "failed to generate grafana datasource secret: failed to get password from database_uri: test-url"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.370Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers.init.func5.4 
/go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/manager_test.go:139 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/node.go:475 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/suite.go:894 2025-08-18T00:42:33.428Z INFO manager/manager_reconciler.go:362 removing the migration resources 2025-08-18T00:42:33.496Z ERROR controller_certificates certificates/certificates.go:246 Failed to create secret {"name": "inventory-api-server-certs", "error": "secrets \"inventory-api-server-certs\" is forbidden: unable to create new content in namespace namespace-q59m4p because it is being terminated"} github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.createCertSecret /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:246 github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.CreateInventoryCerts /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:80 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:164 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.496Z ERROR inventory/inventory_reconciler.go:152 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:152 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:165 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.496Z ERROR 
controller/controller.go:316 Reconciler error {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-q59m4p"}, "namespace": "namespace-q59m4p", "name": "test-mgh", "reconcileID": "16d51e37-2a95-42c5-b072-596ebbde137f", "error": "secrets \"inventory-api-server-certs\" is forbidden: unable to create new content in namespace namespace-q59m4p because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:33.512Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:33.513Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:33.513Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:42:33.513Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.StatefulSet"} 2025-08-18T00:42:33.513Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:42:33.513Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.PrometheusRule"} 2025-08-18T00:42:33.513Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceMonitor"} 2025-08-18T00:42:33.513Z INFO controller/controller.go:183 Starting Controller {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:33.513Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /managed-serviceaccount not found, skip reconcile 2025-08-18T00:42:33.513Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:42:33.513Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /cluster-proxy not found, skip reconcile 2025-08-18T00:42:33.594Z ERROR 
manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.594Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.594Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"multicluster-global-hub-manager","namespace":"namespace-jxfz75"}, "namespace": "namespace-jxfz75", "name": "multicluster-global-hub-manager", "reconcileID": "f466e91a-c6ee-4a92-9678-a71f041af459", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": 
"\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 1222 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x37268c8, 0xc0027f9b30}, {0x2b54860, 0x533b0f0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x2b54860?, 0x533b0f0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod({0x37268c8, 0xc0027f9b30}, {0x0, 0x0}, {0xc00130e7f0, 0x10}, {0x318bd30?, 0x1?})\n\t/go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 +0xbd\ngithub.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile(0xc002931440, {0x37268c8, 0xc0027f9b30}, {{{0x0?, 0x312b712?}, {0x5?, 0x100?}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 +0x10aa\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc0027f9aa0?, {0x37268c8?, 0xc0027f9b30?}, {{{0xc001129640?, 0x0?}, {0xc002734540?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x3747160, {0x3726900, 0xc0009fcdc0}, {{{0xc001129640, 0x10}, {0xc002734540, 0x1f}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x3747160, {0x3726900, 0xc0009fcdc0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 1141\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.594Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"multicluster-global-hub-manager","namespace":"namespace-jxfz75"}, "namespace": "namespace-jxfz75", "name": "multicluster-global-hub-manager", "reconcileID": "f466e91a-c6ee-4a92-9678-a71f041af459", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.613Z INFO controller/controller.go:217 Starting workers {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:33.748Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.872Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214 
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:240 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.872Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:240 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.872Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:240 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.872Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:240 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.921Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.983Z INFO utils/utils.go:163 creating configmap, namespace: namespace-jxfz75, name: multicluster-global-hub-alerting 2025-08-18T00:42:33.984Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214 
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:240 github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers.init.func7.1 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/storage_test.go:78 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/node.go:475 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/suite.go:894 2025-08-18T00:42:33.992Z INFO controller/controller.go:175 Starting EventSource {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:42:33.992Z INFO controller/controller.go:175 Starting EventSource {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:33.992Z INFO controller/controller.go:183 Starting Controller {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:42:33.992Z INFO controller/controller.go:217 Starting workers {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "worker count": 1} 2025-08-18T00:42:33.993Z INFO utils/utils.go:193 creating secret, namespace: namespace-jxfz75, name: multicluster-global-hub-grafana-config 2025-08-18T00:42:33.999Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:367 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:34.042Z INFO storage/postgres_user_reconciler.go:358 create postgres user: test-user1 2025-08-18T00:42:34.183Z INFO storage/postgres_user_reconciler.go:322 database test1 created. 2025-08-18T00:42:34.206Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test-user1 on database test1. 
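The INFO entries above show the postgres user reconciler creating the test-user1 role, creating database test1, and granting it privileges. As a reading aid, here is a minimal Go sketch of that create-role / create-database / grant sequence, assuming database/sql with github.com/lib/pq; the function name, password, and connection string are illustrative placeholders and not the code in storage/postgres_user_reconciler.go:

package main

import (
    "database/sql"
    "fmt"
    "log"

    "github.com/lib/pq" // registers the "postgres" driver and provides quoting helpers
)

// ensureUserAndDatabases is an illustrative helper, not the operator's code.
// It mirrors the sequence visible in the log: create the role, create each
// database, then grant all privileges on that database to the role.
func ensureUserAndDatabases(db *sql.DB, user, password string, databases []string) error {
    // CREATE USER fails if the role already exists; the reconciler in the log
    // reports "postgres user '...' already exists" and continues, so only warn here.
    if _, err := db.Exec(fmt.Sprintf("CREATE USER %s WITH PASSWORD %s",
        pq.QuoteIdentifier(user), pq.QuoteLiteral(password))); err != nil {
        log.Printf("create user %s: %v (may already exist)", user, err)
    }
    for _, name := range databases {
        if _, err := db.Exec("CREATE DATABASE " + pq.QuoteIdentifier(name)); err != nil {
            log.Printf("create database %s: %v (may already exist)", name, err)
        }
        if _, err := db.Exec(fmt.Sprintf("GRANT ALL PRIVILEGES ON DATABASE %s TO %s",
            pq.QuoteIdentifier(name), pq.QuoteIdentifier(user))); err != nil {
            return fmt.Errorf("grant on %s: %w", name, err)
        }
    }
    return nil
}

func main() {
    // Placeholder connection string for the embedded-postgres instance used by the test.
    db, err := sql.Open("postgres", "host=localhost port=19428 user=postgres sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    if err := ensureUserAndDatabases(db, "test-user1", "changeme", []string{"test1", "test-2"}); err != nil {
        log.Fatal(err)
    }
}

Quoting identifiers is what makes hyphenated names such as test-2, which the log creates just below, safe to pass through as database names.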
2025-08-18T00:42:34.224Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:367 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:34.328Z INFO storage/postgres_user_reconciler.go:322 database test-2 created. 2025-08-18T00:42:34.349Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test-user1 on database test-2. 2025-08-18T00:42:34.352Z INFO storage/postgres_user_reconciler.go:242 create the postgresql user secret: postgresql-user-test-user1 2025-08-18T00:42:34.352Z INFO storage/postgres_user_reconciler.go:149 applied the postgresql users successfully! 
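The "create the postgresql user secret: postgresql-user-test-user1" entry corresponds to the Secret dumped just below, which carries the connection details and is owned by the MulticlusterGlobalHub resource (controller: true, blockOwnerDeletion: true in the dump). A hedged sketch of how such a Secret can be built with controller-runtime follows; names, labels, and data keys are copied from the dump, while the helper itself is illustrative and not the operator's code:

package sketch

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// createUserSecret sketches the "create the postgresql user secret" step.
// owner would be the MulticlusterGlobalHub CR so the Secret is garbage-collected with it.
func createUserSecret(ctx context.Context, c client.Client, scheme *runtime.Scheme, owner client.Object, namespace string) error {
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "postgresql-user-test-user1",
            Namespace: namespace,
            Labels: map[string]string{
                "global-hub.open-cluster-management.io/managed-by": "multicluster-global-hub-custom-postgresql-users",
            },
        },
        Type: corev1.SecretTypeOpaque,
        // client-go takes raw bytes here; the API server base64-encodes them in the
        // JSON representation, which is why the dump below shows encoded values.
        Data: map[string][]byte{
            "db.host":     []byte("localhost"),
            "db.port":     []byte("19428"),
            "db.user":     []byte("test-user1"),
            "db.password": []byte("..."), // elided
            "db.names":    []byte(`["test1", "test-2"]`),
            "db.ca_cert":  []byte(""),
        },
    }
    // Sets the ownerReference with controller/blockOwnerDeletion, as seen in the dump.
    if err := controllerutil.SetControllerReference(owner, secret, scheme); err != nil {
        return err
    }
    return c.Create(ctx, secret)
}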
2025-08-18T00:42:34.476Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:367 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 { "metadata": { "name": "postgresql-user-test-user1", "namespace": "namespace-jxfz75", "uid": "af800841-5b9f-43f1-ade6-03ef3c3e8535", "resourceVersion": "673", "creationTimestamp": "2025-08-18T00:42:34Z", "labels": { "global-hub.open-cluster-management.io/managed-by": "multicluster-global-hub-custom-postgresql-users" }, "ownerReferences": [ { "apiVersion": "operator.open-cluster-management.io/v1alpha4", "kind": "MulticlusterGlobalHub", "name": "test-mgh", "uid": "92e83aab-2e85-460f-b004-5eeb71b5b9f7", "controller": true, "blockOwnerDeletion": true } ], "managedFields": [ { "manager": "controllers.test", "operation": "Update", "apiVersion": "v1", "time": "2025-08-18T00:42:34Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:data": { ".": {}, "f:db.ca_cert": {}, "f:db.host": {}, "f:db.names": {}, "f:db.password": {}, "f:db.port": {}, "f:db.user": {} }, "f:metadata": { "f:labels": { ".": {}, "f:global-hub.open-cluster-management.io/managed-by": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"92e83aab-2e85-460f-b004-5eeb71b5b9f7\"}": {} } }, "f:type": {} } } ] }, "data": { "db.ca_cert": "", "db.host": "bG9jYWxob3N0", "db.names": "WyJ0ZXN0MSIsICJ0ZXN0LTIiXQ==", "db.password": "MTY3Y2Q4NjViZmFh", "db.port": "MTk0Mjg=", "db.user": "dGVzdC11c2VyMQ==" }, "type": "Opaque" } 2025-08-18T00:42:34.493Z INFO storage/postgres_user_reconciler.go:351 postgres user 'test-user1' already exists 2025-08-18T00:42:34.493Z INFO storage/postgres_user_reconciler.go:324 database test1 already exists. 2025-08-18T00:42:34.504Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test-user1 on database test1. 2025-08-18T00:42:34.506Z INFO storage/postgres_user_reconciler.go:324 database test-2 already exists. 2025-08-18T00:42:34.522Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test-user1 on database test-2. 
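The data fields in the Secret dump above are base64-encoded, as Kubernetes Secret data always is in its JSON form; decoded they read db.host=localhost, db.port=19428, db.user=test-user1, db.names=["test1", "test-2"]. A small self-contained Go snippet for decoding such a dump while reading the log:

package main

import (
    "encoding/base64"
    "fmt"
    "log"
)

func main() {
    // Values copied from the postgresql-user-test-user1 dump above (password omitted).
    data := map[string]string{
        "db.host":  "bG9jYWxob3N0",
        "db.names": "WyJ0ZXN0MSIsICJ0ZXN0LTIiXQ==",
        "db.port":  "MTk0Mjg=",
        "db.user":  "dGVzdC11c2VyMQ==",
    }
    for key, encoded := range data {
        decoded, err := base64.StdEncoding.DecodeString(encoded)
        if err != nil {
            log.Fatalf("decode %s: %v", key, err)
        }
        // Prints: db.host = localhost, db.names = ["test1", "test-2"],
        // db.port = 19428, db.user = test-user1
        fmt.Printf("%s = %s\n", key, decoded)
    }
}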
2025-08-18T00:42:34.522Z INFO storage/postgres_user_reconciler.go:252 the postgresql user secret already exists: postgresql-user-test-user1 2025-08-18T00:42:34.528Z INFO storage/postgres_user_reconciler.go:358 create postgres user: test_user2 2025-08-18T00:42:34.591Z INFO storage/postgres_user_reconciler.go:322 database test3 created. 2025-08-18T00:42:34.606Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test_user2 on database test3. 2025-08-18T00:42:34.676Z INFO storage/postgres_user_reconciler.go:322 database test_4 created. 2025-08-18T00:42:34.693Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test_user2 on database test_4. 2025-08-18T00:42:34.700Z INFO storage/postgres_user_reconciler.go:242 create the postgresql user secret: postgresql-user-test-user2 2025-08-18T00:42:34.700Z INFO storage/postgres_user_reconciler.go:149 applied the postgresql users successfully! { "metadata": { "name": "postgresql-user-test-user2", "namespace": "namespace-jxfz75", "uid": "7a564ff3-cd04-47b4-aa5a-21198c613bba", "resourceVersion": "677", "creationTimestamp": "2025-08-18T00:42:34Z", "labels": { "global-hub.open-cluster-management.io/managed-by": "multicluster-global-hub-custom-postgresql-users" }, "ownerReferences": [ { "apiVersion": "operator.open-cluster-management.io/v1alpha4", "kind": "MulticlusterGlobalHub", "name": "test-mgh", "uid": "92e83aab-2e85-460f-b004-5eeb71b5b9f7", "controller": true, "blockOwnerDeletion": true } ], "managedFields": [ { "manager": "controllers.test", "operation": "Update", "apiVersion": "v1", "time": "2025-08-18T00:42:34Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:data": { ".": {}, "f:db.ca_cert": {}, "f:db.host": {}, "f:db.names": {}, "f:db.password": {}, "f:db.port": {}, "f:db.user": {} }, "f:metadata": { "f:labels": { ".": {}, "f:global-hub.open-cluster-management.io/managed-by": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"92e83aab-2e85-460f-b004-5eeb71b5b9f7\"}": {} } }, "f:type": {} } } ] }, "data": { "db.ca_cert": "", "db.host": "bG9jYWxob3N0", "db.names": "WyJ0ZXN0MyIsICJ0ZXN0XzQiXQ==", "db.password": "MzMwNzhjZDc5Mjcz", "db.port": "MTk0Mjg=", "db.user": "dGVzdF91c2VyMg==" }, "type": "Opaque" } 2025-08-18T00:42:34.798Z INFO storage/postgres_statefulset.go:65 the postgres customized config: •2025-08-18T00:42:34.844Z ERROR controller_certificates certificates/certificates.go:246 Failed to create secret {"name": "inventory-api-guest-certs", "error": "secrets \"inventory-api-guest-certs\" is forbidden: unable to create new content in namespace namespace-jxfz75 because it is being terminated"} github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.createCertSecret /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:246 github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.CreateInventoryCerts /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:85 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:164 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:34.845Z ERROR inventory/inventory_reconciler.go:152 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:152 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:165 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:34.845Z ERROR controller/controller.go:316 Reconciler error {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-jxfz75"}, "namespace": "namespace-jxfz75", "name": "test-mgh", "reconcileID": "6ea9bf8d-c262-4905-9eda-8d3563e1712b", "error": "secrets \"inventory-api-guest-certs\" is forbidden: unable to create new content in namespace namespace-jxfz75 because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:34.849Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile 
/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:220 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:34.849Z ERROR controller/controller.go:316 Reconciler error {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"multicluster-global-hub-storage","namespace":"namespace-jxfz75"}, "namespace": "namespace-jxfz75", "name": "multicluster-global-hub-storage", "reconcileID": "166b8c71-9886-40d9-8174-3f14a581d6ee", "error": "storage not ready, Error: failed to create/update postgres objects: configmaps \"multicluster-global-hub-postgresql-init\" is forbidden: unable to create new content in namespace namespace-jxfz75 because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:34.851Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:42:34.851Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /cluster-proxy not found, skip reconcile 2025-08-18T00:42:34.852Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /managed-serviceaccount not found, skip reconcile 2025-08-18T00:42:35.010Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.081Z INFO storage/postgres_crunchy.go:98 waiting the postgres connection credential to be ready...messagepostgres guest user secret postgres-pguser-guest is nil 2025-08-18T00:42:35.082Z INFO storage/postgres_crunchy.go:98 waiting the postgres connection credential to be ready...messagepostgres guest user secret postgres-pguser-guest is nil 2025-08-18T00:42:35.098Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:35.204Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:337 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.204Z ERROR controller/controller.go:316 Reconciler error 
{"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-g44jpq"}, "namespace": "namespace-g44jpq", "name": "test-mgh", "reconcileID": "35e34e59-8ebb-4ca6-867f-70fb6e87468b", "error": "failed to create/update grafana objects: serviceaccounts \"multicluster-global-hub-grafana\" is forbidden: unable to create new content in namespace namespace-g44jpq because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.223Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /cluster-proxy not found, skip reconcile 2025-08-18T00:42:35.223Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /managed-serviceaccount not found, skip reconcile 2025-08-18T00:42:35.223Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:42:35.235Z INFO storage/postgres_statefulset.go:65 the postgres customized config: wal_level = logical max_wal_size = 2GB 2025-08-18T00:42:35.348Z ERROR controller_certificates certificates/certificates.go:134 Failed to create secret {"name": "inventory-api-server-ca-certs", "error": "secrets \"inventory-api-server-ca-certs\" is forbidden: unable to create new content in namespace namespace-g44jpq because it is being terminated"} github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.createCASecret /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:134 github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.CreateInventoryCerts /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:65 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:164 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.349Z ERROR inventory/inventory_reconciler.go:152 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found 
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:152 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:165 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.349Z ERROR controller/controller.go:316 Reconciler error {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-jxfz75"}, "namespace": "namespace-jxfz75", "name": "test-mgh", "reconcileID": "87ae9f3f-5482-4841-9c80-4fe44c500c64", "error": "secrets \"inventory-api-server-ca-certs\" is forbidden: unable to create new content in namespace namespace-g44jpq because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.379Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.427Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 "ssl = on\nssl_cert_file = '/opt/app-root/src/certs/tls.crt' # server certificate\nssl_key_file = '/opt/app-root/src/certs/tls.key' # server private key\nssl_min_protocol_version = TLSv1.3\nwal_level = logical\nmax_wal_size = 2GB\n" •2025-08-18T00:42:35.462Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "test-mgh" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:337 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.463Z ERROR controller/controller.go:316 Reconciler error {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-g44jpq"}, "namespace": "namespace-g44jpq", "name": "test-mgh", "reconcileID": "02da85d3-e7e5-4960-97f8-5078ccb87c09", "error": 
"failed to create/update grafana objects: serviceaccounts \"multicluster-global-hub-grafana\" is forbidden: unable to create new content in namespace namespace-rzbv5g because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.468Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:42:35.468Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /cluster-proxy not found, skip reconcile 2025-08-18T00:42:35.468Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /managed-serviceaccount not found, skip reconcile { "BootstrapServer": "localhost:test", "StatusTopic": "gh-status", "SpecTopic": "gh-spec", "ClusterID": "localhost:test", "CACert": "Y2EuY3J0", "ClientCert": "Y2xpZW50LmNydA==", "ClientKey": "Y2xpZW50LmtleQ==", "CASecretName": "", "ClientSecretName": "" } •2025-08-18T00:42:35.609Z ERROR controller_certificates certificates/certificates.go:134 Failed to create secret {"name": "inventory-api-server-ca-certs", "error": "secrets \"inventory-api-server-ca-certs\" is forbidden: unable to create new content in namespace namespace-rzbv5g because it is being terminated"} github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.createCASecret /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:134 github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.CreateInventoryCerts /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:65 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:164 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.615Z ERROR controller/controller.go:316 Reconciler error {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-g44jpq"}, "namespace": "namespace-g44jpq", "name": "test-mgh", "reconcileID": "1c6c9a67-1541-4816-97ff-458bccd86796", "error": "secrets \"inventory-api-server-ca-certs\" is forbidden: unable to create new content 
in namespace namespace-rzbv5g because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.773Z INFO utils/utils.go:163 creating configmap, namespace: namespace-gbvxgw, name: multicluster-global-hub-alerting 2025-08-18T00:42:35.775Z INFO utils/utils.go:193 creating secret, namespace: namespace-gbvxgw, name: multicluster-global-hub-grafana-config 2025-08-18T00:42:36.036Z INFO protocol/strimzi_kafka_controller.go:173 start kafka controller 2025-08-18T00:42:36.037Z INFO protocol/strimzi_kafka_controller.go:194 kafka controller is started 2025-08-18T00:42:36.037Z INFO controller/controller.go:175 Starting EventSource {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:36.037Z INFO controller/controller.go:175 Starting EventSource {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta2.Kafka"} 2025-08-18T00:42:36.037Z INFO controller/controller.go:175 Starting EventSource {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta2.KafkaUser"} 2025-08-18T00:42:36.037Z INFO controller/controller.go:175 Starting EventSource {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta2.KafkaTopic"} 2025-08-18T00:42:36.037Z INFO controller/controller.go:183 Starting Controller {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.138Z INFO controller/controller.go:217 Starting workers {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:36.170Z INFO protocol/strimzi_transporter.go:685 kafka cluster is ready 2025-08-18T00:42:36.170Z INFO config/transport_config.go:255 set the ca - client key: kafka-clients-ca 2025-08-18T00:42:36.170Z INFO config/transport_config.go:271 set the ca - client cert: kafka-clients-ca-cert 2025-08-18T00:42:36.186Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-gbvxgw"}, "namespace": "namespace-gbvxgw", "name": "test-mgh", "reconcileID": "a9eebe14-5043-48f5-9f80-c3092645813f", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 1222 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x37268c8, 0xc00397e030}, 
{0x2b54860, 0x533b0f0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x2b54860?, 0x533b0f0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod({0x37268c8, 0xc00397e030}, {0x0, 0x0}, {0xc00327df70, 0x10}, {0x318bd30?, 0xffffffffffffffff?})\n\t/go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 +0xbd\ngithub.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile(0xc002931440, {0x37268c8, 0xc00397e030}, {{{0x0?, 0x312b712?}, {0x5?, 0x100?}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 +0x10aa\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc00356ff20?, {0x37268c8?, 0xc00397e030?}, {{{0xc00327df70?, 0x0?}, {0xc00327df68?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x3747160, {0x3726900, 0xc0009fcdc0}, {{{0xc00327df70, 0x10}, {0xc00327df68, 0x8}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x3747160, {0x3726900, 0xc0009fcdc0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 1141\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:36.187Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-gbvxgw"}, "namespace": "namespace-gbvxgw", "name": "test-mgh", "reconcileID": "a9eebe14-5043-48f5-9f80-c3092645813f", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:36.244Z INFO protocol/strimzi_transporter.go:685 kafka cluster is ready 2025-08-18T00:42:36.246Z INFO config/transport_config.go:255 set the ca - client key: kafka-clients-ca 2025-08-18T00:42:36.246Z INFO config/transport_config.go:271 set the ca - client cert: kafka-clients-ca-cert 2025-08-18T00:42:36.293Z INFO protocol/strimzi_transporter.go:685 kafka cluster is ready 2025-08-18T00:42:36.293Z INFO config/transport_config.go:255 set the ca - client key: kafka-clients-ca 2025-08-18T00:42:36.293Z INFO config/transport_config.go:271 set the ca - client cert: kafka-clients-ca-cert 2025-08-18T00:42:36.370Z INFO protocol/strimzi_transporter.go:369 create the kafakUser: hub1-kafka-user 2025-08-18T00:42:36.401Z INFO protocol/strimzi_transporter.go:685 kafka cluster is ready 2025-08-18T00:42:36.401Z INFO config/transport_config.go:255 set the ca - client key: kafka-clients-ca 2025-08-18T00:42:36.401Z INFO config/transport_config.go:271 set the ca - client cert: kafka-clients-ca-cert •2025-08-18T00:42:36.425Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:42:36.425Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:42:36.425Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.425Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:42:36.425Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.425Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": 
"inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.426Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "test-mgh" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:220 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:36.426Z ERROR controller/controller.go:316 Reconciler error {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-jxfz75"}, "namespace": "namespace-jxfz75", "name": "test-mgh", "reconcileID": "2f95d403-15cf-4774-8c8f-d77658fd8af1", "error": "storage not ready, Error: context canceled"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:36.426Z INFO 
controller/controller.go:239 All workers finished {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:239 All workers finished {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:239 All workers finished {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:239 All workers finished {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:239 All workers finished {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:239 All workers finished {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.426Z INFO controller/controller.go:239 All workers finished {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.467Z INFO protocol/strimzi_transporter.go:685 kafka cluster is ready 2025-08-18T00:42:36.467Z INFO config/transport_config.go:255 set the ca - client key: kafka-clients-ca 2025-08-18T00:42:36.467Z INFO config/transport_config.go:271 set the ca - client cert: kafka-clients-ca-cert 2025-08-18T00:42:36.467Z ERROR protocol/strimzi_kafka_controller.go:99 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "test-mgh" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/transporter/protocol.(*KafkaController).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/transporter/protocol/strimzi_kafka_controller.go:99 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/transporter/protocol.(*KafkaController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/transporter/protocol/strimzi_kafka_controller.go:136 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:36.467Z INFO controller/controller.go:239 All workers finished {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.587Z ERROR controller_certificates certificates/certificates.go:134 Failed to create secret {"name": "inventory-api-server-ca-certs", "error": "secrets 
\"inventory-api-server-ca-certs\" is forbidden: unable to create new content in namespace namespace-gbvxgw because it is being terminated"} github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.createCASecret /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:134 github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.CreateInventoryCerts /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:65 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:164 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:36.588Z ERROR inventory/inventory_reconciler.go:152 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "test-mgh" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:152 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:165 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:36.588Z ERROR controller/controller.go:316 Reconciler error {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-rzbv5g"}, "namespace": "namespace-rzbv5g", "name": "test-mgh", "reconcileID": "7b4c6f49-fc17-49fb-aa78-b7191a9eb6c0", "error": "secrets \"inventory-api-server-ca-certs\" is forbidden: unable to create new content in namespace namespace-gbvxgw because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:36.588Z INFO controller/controller.go:239 All workers finished {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:36.588Z INFO manager/internal.go:550 Stopping and waiting for caches 2025-08-18T00:42:36.588Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:42:36.588Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:42:36.588Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager 2025-08-18T00:42:36.588Z ERROR manager/internal.go:512 error received after stop sequence was engaged {"error": "leader election lost"} sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/manager/internal.go:512 waiting for server to shut down....2025-08-18 00:42:36.416 UTC [25309] LOG: received fast shutdown request 2025-08-18 00:42:36.416 UTC [25309] LOG: aborting any active transactions 2025-08-18 00:42:36.418 UTC [25309] LOG: background worker "logical replication launcher" (PID 25315) exited with exit code 1 2025-08-18 00:42:36.418 UTC [25310] LOG: shutting down 2025-08-18 00:42:36.418 UTC [25310] LOG: checkpoint starting: shutdown immediate 2025-08-18 00:42:36.516 UTC [25310] LOG: checkpoint complete: wrote 4691 buffers (28.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.084 s, sync=0.014 s, total=0.098 s; sync files=1680, longest=0.001 s, average=0.001 s; distance=22405 kB, estimate=22405 kB; lsn=0/2ABFCC8, redo lsn=0/2ABFCC8 2025-08-18 00:42:36.536 UTC [25309] LOG: database system is shut down done server stopped Ran 15 of 15 Specs in 23.476 seconds SUCCESS! 
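The nil pointer panic recovered earlier in this suite (pkg/utils.RestartPod at utils.go:109, called from manager_reconciler.go:275 with what appears to be a zero-value argument) did not fail the run because controller-runtime recovers reconciler panics and reports them as a "Reconciler error". Below is a minimal sketch, assuming a controller-runtime client, of the kind of up-front argument guard that prevents this class of dereference; the function name RestartPodsByLabel and its signature are hypothetical, not the project's actual RestartPod.

package utilsketch

import (
	"context"
	"errors"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// RestartPodsByLabel deletes the pods matching the given labels in namespace so
// that their owning controller recreates them. Every input is validated before
// use, so a nil client or empty selector surfaces as an error instead of a panic.
func RestartPodsByLabel(ctx context.Context, c client.Client, namespace string, labels map[string]string) error {
	if c == nil {
		return errors.New("kubernetes client must not be nil")
	}
	if namespace == "" || len(labels) == 0 {
		return errors.New("namespace and label selector must not be empty")
	}

	podList := &corev1.PodList{}
	if err := c.List(ctx, podList, client.InNamespace(namespace), client.MatchingLabels(labels)); err != nil {
		return fmt.Errorf("failed to list pods: %w", err)
	}
	for i := range podList.Items {
		pod := &podList.Items[i]
		if err := c.Delete(ctx, pod); err != nil {
			return fmt.Errorf("failed to delete pod %s/%s: %w", pod.Namespace, pod.Name, err)
		}
	}
	return nil
}

Returning an error here lets the reconciler requeue cleanly instead of relying on controller-runtime's panic recovery.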
-- 15 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestControllers (23.48s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers 23.538s === RUN TestControllers Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent =============================================================================================================================================== Random Seed: 1755477743 Will run 10 of 10 specs 2025-08-18T00:42:33.336Z INFO addon/addon_manager.go:66 start addon manager controller 2025-08-18T00:42:33.377Z INFO addon/addon_manager.go:130 starting addon manager 2025-08-18T00:42:33.377Z INFO addon/addon_manager.go:76 inited GlobalHubAddonManager controller 2025-08-18T00:42:33.377Z INFO addon/default_agent_controller.go:71 start default agent controller I0818 00:42:33.384975 25104 base_controller.go:34] Waiting for caches to sync for addon-deploy-controller I0818 00:42:33.385011 25104 base_controller.go:34] Waiting for caches to sync for addon-registration-controller I0818 00:42:33.385025 25104 base_controller.go:34] Waiting for caches to sync for cma-managed-by-controller I0818 00:42:33.385072 25104 base_controller.go:34] Waiting for caches to sync for CSRApprovingController I0818 00:42:33.385094 25104 base_controller.go:34] Waiting for caches to sync for CSRSignController 2025-08-18T00:42:33.398Z INFO addon/default_agent_controller.go:170 the default agent controller is started 2025-08-18T00:42:33.398Z INFO agent/local_agent_controller.go:48 start local agent controller 2025-08-18T00:42:33.401Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:33.401Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ManagedCluster"} 2025-08-18T00:42:33.401Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.Deployment"} 2025-08-18T00:42:33.401Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:42:33.401Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:42:33.401Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ClusterRole"} 2025-08-18T00:42:33.401Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ClusterRoleBinding"} 2025-08-18T00:42:33.401Z INFO controller/controller.go:183 Starting Controller {"controller": "local-agent-reconciler"} 2025-08-18T00:42:33.403Z INFO controller/controller.go:175 Starting EventSource {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:42:33.403Z INFO controller/controller.go:175 Starting EventSource {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ManagedCluster"} 2025-08-18T00:42:33.405Z INFO controller/controller.go:175 Starting EventSource 
{"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.ManagedClusterAddOn"} 2025-08-18T00:42:33.405Z INFO controller/controller.go:175 Starting EventSource {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.ClusterManagementAddOn"} 2025-08-18T00:42:33.405Z INFO controller/controller.go:175 Starting EventSource {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:42:33.405Z INFO controller/controller.go:183 Starting Controller {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} I0818 00:42:33.485983 25104 base_controller.go:40] Caches are synced for CSRSignController I0818 00:42:33.486021 25104 base_controller.go:78] Starting #1 worker of CSRSignController controller ... I0818 00:42:33.486047 25104 base_controller.go:40] Caches are synced for addon-registration-controller I0818 00:42:33.486052 25104 base_controller.go:78] Starting #1 worker of addon-registration-controller controller ... I0818 00:42:33.486061 25104 base_controller.go:40] Caches are synced for cma-managed-by-controller I0818 00:42:33.486065 25104 base_controller.go:78] Starting #1 worker of cma-managed-by-controller controller ... I0818 00:42:33.486154 25104 base_controller.go:40] Caches are synced for CSRApprovingController I0818 00:42:33.486163 25104 base_controller.go:78] Starting #1 worker of CSRApprovingController controller ... 
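Each "Controller Integration Suite" run above is a Ginkgo suite ("Random Seed", "Will run N of N specs") that appears to drive the controllers against a local envtest API server. For orientation, here is a minimal sketch of the usual bootstrap for such a suite; the package name, CRD path, and variable names are assumptions, not the project's actual suite file.

package controllers_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

var (
	cfg       *rest.Config
	k8sClient client.Client
	testEnv   *envtest.Environment
)

func TestControllers(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Controller Integration Suite")
}

var _ = BeforeSuite(func() {
	// envtest starts a local kube-apiserver and etcd and serves the CRDs
	// loaded from disk, so reconcilers run against a real API surface.
	testEnv = &envtest.Environment{
		CRDDirectoryPaths: []string{"../../config/crd/bases"}, // hypothetical path
	}

	var err error
	cfg, err = testEnv.Start()
	Expect(err).NotTo(HaveOccurred())

	k8sClient, err = client.New(cfg, client.Options{})
	Expect(err).NotTo(HaveOccurred())
})

var _ = AfterSuite(func() {
	Expect(testEnv.Stop()).To(Succeed())
})

The "connection refused" and "watch ended with error: context canceled" messages later in the log are consistent with that local API server being torn down at the end of each suite.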
2025-08-18T00:42:33.519Z INFO controller/controller.go:217 Starting workers {"controller": "local-agent-reconciler", "worker count": 1} 2025-08-18T00:42:33.519Z INFO addon/default_agent_controller.go:457 triggering all the addons/clusters: %d1 2025-08-18T00:42:33.527Z INFO agent/local_agent_controller.go:304 create transport secret transport-config-local-cluster for local agent 2025-08-18T00:42:33.527Z INFO controller/controller.go:217 Starting workers {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:42:33.527Z INFO addon/default_agent_controller.go:265 cluster(hub-n9zjtc): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:33.527Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-n9zjtc", "addon": "multicluster-global-hub-controller"} 2025-08-18T00:42:33.528Z INFO addon/default_agent_controller.go:457 triggering all the addons/clusters: %d1 2025-08-18T00:42:33.532Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.healthCheck" 2025-08-18T00:42:33.532Z INFO addon/default_agent_controller.go:248 not found the cluster test-mgh, the controller might triggered by multiclusterglboalhub 2025-08-18T00:42:33.532Z INFO addon/default_agent_controller.go:265 cluster(hub-n9zjtc): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:33.533Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-n9zjtc", "addon": "multicluster-global-hub-controller"} 2025-08-18T00:42:33.536Z INFO certificates/csr.go:17 specify the clientName(CN: hub-n9zjtc-kafka-user) for managed hub cluster(hub-n9zjtc) 2025-08-18T00:42:33.544Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-n9zjtc"}, "namespace": "", "name": "hub-n9zjtc", "reconcileID": "915d084f-7cca-40e2-be0f-209da565f366", "error": "managedclusteraddons.addon.open-cluster-management.io \"multicluster-global-hub-controller\" already exists"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:33.544Z INFO addon/default_agent_controller.go:265 cluster(hub-hosting-tpjtnx): isDetaching - false, hasDeployLabel - false 2025-08-18T00:42:33.544Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-hosting-tpjtnx"} I0818 00:42:33.545203 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:33.545Z INFO certificates/csr.go:17 specify the clientName(CN: hub-n9zjtc-kafka-user) for managed hub cluster(hub-n9zjtc) 2025-08-18T00:42:33.551Z INFO addon/default_agent_controller.go:265 cluster(hub-n9zjtc): isDetaching - false, hasDeployLabel - true I0818 00:42:33.551985 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:33.552Z INFO addon/default_agent_controller.go:265 
cluster(hub-hosting-tpjtnx): isDetaching - false, hasDeployLabel - false 2025-08-18T00:42:33.552Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-hosting-tpjtnx"} I0818 00:42:34.485240 25104 base_controller.go:40] Caches are synced for addon-deploy-controller I0818 00:42:34.485283 25104 base_controller.go:78] Starting #1 worker of addon-deploy-controller controller ... I0818 00:42:34.512366 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" 2025-08-18T00:42:34.518Z INFO certificates/csr.go:17 specify the clientName(CN: hub-n9zjtc-kafka-user) for managed hub cluster(hub-n9zjtc) I0818 00:42:34.531397 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:34.554Z INFO certificates/csr.go:17 specify the clientName(CN: hub-n9zjtc-kafka-user) for managed hub cluster(hub-n9zjtc) I0818 00:42:34.556925 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" I0818 00:42:34.568253 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" I0818 00:42:34.582891 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" •2025-08-18T00:42:34.610Z INFO addon/default_agent_controller.go:265 cluster(hub-8tkw5m): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:34.610Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-8tkw5m", "addon": "multicluster-global-hub-controller"} 2025-08-18T00:42:34.667Z INFO addon/default_agent_controller.go:265 cluster(hub-hosting-ntxl64): isDetaching - false, hasDeployLabel - false 2025-08-18T00:42:34.667Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-hosting-ntxl64"} 2025-08-18T00:42:34.667Z INFO addon/default_agent_controller.go:265 cluster(hub-8tkw5m): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:34.667Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-8tkw5m", "addon": "multicluster-global-hub-controller"} 2025-08-18T00:42:34.670Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-8tkw5m"}, "namespace": "", "name": "hub-8tkw5m", "reconcileID": "c0838208-c20b-4caf-a645-e88f03c15c96", "error": "managedclusteraddons.addon.open-cluster-management.io \"multicluster-global-hub-controller\" already exists"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:34.671Z INFO certificates/csr.go:17 specify the clientName(CN: hub-8tkw5m-kafka-user) for managed hub cluster(hub-8tkw5m) 2025-08-18T00:42:34.675Z INFO addon/default_agent_controller.go:265 cluster(hub-8tkw5m): isDetaching - false, hasDeployLabel - true I0818 00:42:34.679874 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:34.680Z INFO certificates/csr.go:17 specify the clientName(CN: hub-8tkw5m-kafka-user) for managed hub 
cluster(hub-8tkw5m) 2025-08-18T00:42:34.681Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} I0818 00:42:34.687084 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:34.697Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} 2025-08-18T00:42:34.717Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} I0818 00:42:34.728461 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" 2025-08-18T00:42:34.729Z INFO certificates/csr.go:17 specify the clientName(CN: hub-8tkw5m-kafka-user) for managed hub cluster(hub-8tkw5m) 2025-08-18T00:42:34.736Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} I0818 00:42:34.740284 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:34.741Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} 2025-08-18T00:42:34.750Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} I0818 00:42:34.756997 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" E0818 00:42:34.757286 25104 base_controller.go:159] "Unhandled Error" err="\"addon-deploy-controller\" controller failed to sync \"hub-8tkw5m/multicluster-global-hub-controller\", err: Operation cannot be fulfilled on managedclusteraddons.addon.open-cluster-management.io \"multicluster-global-hub-controller\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError" 2025-08-18T00:42:34.760Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} 2025-08-18T00:42:34.766Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} 2025-08-18T00:42:34.783Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} 2025-08-18T00:42:34.790Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} 2025-08-18T00:42:34.799Z INFO certificates/csr.go:17 specify the clientName(CN: hub-8tkw5m-kafka-user) for managed hub cluster(hub-8tkw5m) I0818 00:42:34.803663 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" I0818 00:42:34.804522 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:34.806Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} 2025-08-18T00:42:34.812Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} 2025-08-18T00:42:34.818Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} 2025-08-18T00:42:34.824Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-8tkw5m"} I0818 00:42:34.833082 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" •2025-08-18T00:42:34.929Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-dxnzss): isDetaching - false, hasDeployLabel - false 2025-08-18T00:42:34.929Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-ocp-mode-none-dxnzss"} 2025-08-18T00:42:34.933Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-dxnzss): isDetaching - false, hasDeployLabel - false 2025-08-18T00:42:34.933Z 
INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-ocp-mode-none-dxnzss"} 2025-08-18T00:42:34.952Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-no-condtion-zpntnd): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:34.952Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-ocp-no-condtion-zpntnd", "addon": "multicluster-global-hub-controller"} 2025-08-18T00:42:35.010Z INFO addon/default_agent_controller.go:265 cluster(local-cluster): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:35.010Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "local-cluster", "addon": "multicluster-global-hub-controller"} 2025-08-18T00:42:35.010Z INFO certificates/csr.go:17 specify the clientName(CN: hub-ocp-no-condtion-zpntnd-kafka-user) for managed hub cluster(hub-ocp-no-condtion-zpntnd) I0818 00:42:35.015737 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:35.015Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:35.015Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "e77aca62-3778-43ce-9658-f91e553ef6e3", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.016Z INFO certificates/csr.go:17 specify the clientName(CN: local-cluster-kafka-user) for managed hub cluster(local-cluster) 2025-08-18T00:42:35.021Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:35.021Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "02952e7f-ec13-4ec9-9dbd-627dcc342cb8", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 I0818 00:42:35.021367 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:35.022Z INFO certificates/csr.go:17 specify the clientName(CN: hub-ocp-no-condtion-zpntnd-kafka-user) for managed hub cluster(hub-ocp-no-condtion-zpntnd) 2025-08-18T00:42:35.038Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:35.038Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "20fa0918-20ce-465a-ae27-9a82d4019ed2", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 I0818 00:42:35.041782 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" I0818 00:42:35.042283 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" E0818 00:42:35.042450 25104 base_controller.go:159] "Unhandled Error" err="\"addon-registration-controller\" controller failed to sync \"hub-ocp-no-condtion-zpntnd/multicluster-global-hub-controller\", err: Operation cannot be fulfilled on managedclusteraddons.addon.open-cluster-management.io \"multicluster-global-hub-controller\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError" 2025-08-18T00:42:35.043Z INFO certificates/csr.go:17 specify the clientName(CN: local-cluster-kafka-user) for managed hub cluster(local-cluster) I0818 00:42:35.048350 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:35.048Z INFO certificates/csr.go:17 specify the clientName(CN: hub-ocp-no-condtion-zpntnd-kafka-user) for managed hub cluster(hub-ocp-no-condtion-zpntnd) I0818 00:42:35.052486 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:35.052Z INFO certificates/csr.go:17 specify the clientName(CN: hub-ocp-no-condtion-zpntnd-kafka-user) for managed hub cluster(hub-ocp-no-condtion-zpntnd) I0818 00:42:35.057417 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:35.058Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:35.058Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "6591ed22-76ec-42a1-b115-aad7e57c0f70", "error": "failed to get 
import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 I0818 00:42:35.068861 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" 2025-08-18T00:42:35.069Z INFO certificates/csr.go:17 specify the clientName(CN: local-cluster-kafka-user) for managed hub cluster(local-cluster) I0818 00:42:35.076366 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" I0818 00:42:35.095680 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" 2025-08-18T00:42:35.099Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:35.099Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "445f5841-9e0c-492a-afc4-fddde8aaa4ad", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 I0818 00:42:35.123345 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" 2025-08-18T00:42:35.196Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:35.202Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "e98bc263-4efc-4ce9-b320-6c06377d1995", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.363Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:35.363Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "40047ede-1495-472c-8234-1740f76280d9", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:35.684Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:35.684Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "4ae6cc3e-3994-4235-8ebd-35f779ee20aa", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:36.324Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:36.325Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "769a8976-51de-4b88-be56-29713443cb53", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:37.605Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:37.605Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "e5531428-937c-44e0-95d5-3eeae6ff13cc", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:40.166Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-fbxcnb): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:40.166Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-fbxcnb"}, "namespace": "", "name": "hub-ocp-mode-none-fbxcnb", "reconcileID": "0462998e-2890-452a-ace5-92012c1796f6", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:42:41.270Z INFO agent/local_agent_controller.go:163 local cluster name changed from local-cluster to hub-snfs6l 2025-08-18T00:42:41.291Z INFO addon/default_agent_controller.go:248 not found the cluster test-mgh, the controller might triggered by multiclusterglboalhub •2025-08-18T00:42:41.294Z INFO addon/default_agent_controller.go:265 cluster(hub-ktgpsf): isDetaching - false, hasDeployLabel - false 2025-08-18T00:42:41.294Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-ktgpsf"} 2025-08-18T00:42:41.298Z INFO addon/default_agent_controller.go:265 cluster(hub-ktgpsf): isDetaching - false, hasDeployLabel - false 2025-08-18T00:42:41.298Z INFO 
addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-ktgpsf"} •2025-08-18T00:42:41.307Z INFO addon/default_agent_controller.go:265 cluster(hub-4r9fth): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:41.307Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-4r9fth", "addon": "multicluster-global-hub-controller"} 2025-08-18T00:42:41.308Z INFO agent/local_agent_controller.go:304 create transport secret transport-config-local-cluster for local agent 2025-08-18T00:42:41.361Z INFO certificates/csr.go:17 specify the clientName(CN: hub-4r9fth-kafka-user) for managed hub cluster(hub-4r9fth) 2025-08-18T00:42:41.362Z INFO addon/default_agent_controller.go:265 cluster(hub-4r9fth): isDetaching - false, hasDeployLabel - true I0818 00:42:41.366817 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:41.367Z INFO certificates/csr.go:17 specify the clientName(CN: hub-4r9fth-kafka-user) for managed hub cluster(hub-4r9fth) I0818 00:42:41.370610 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" I0818 00:42:41.398881 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" 2025-08-18T00:42:41.399Z INFO certificates/csr.go:17 specify the clientName(CN: hub-4r9fth-kafka-user) for managed hub cluster(hub-4r9fth) I0818 00:42:41.407351 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" I0818 00:42:41.416242 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" E0818 00:42:41.416336 25104 base_controller.go:159] "Unhandled Error" err="\"addon-deploy-controller\" controller failed to sync \"hub-4r9fth/multicluster-global-hub-controller\", err: Operation cannot be fulfilled on managedclusteraddons.addon.open-cluster-management.io \"multicluster-global-hub-controller\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError" I0818 00:42:41.440271 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" I0818 00:42:41.454605 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" •2025-08-18T00:42:41.586Z INFO addon/default_agent_controller.go:265 cluster(hub-xx5nxm): isDetaching - false, hasDeployLabel - true 2025-08-18T00:42:41.586Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-xx5nxm", "addon": "multicluster-global-hub-controller"} 2025-08-18T00:42:41.640Z INFO certificates/csr.go:17 specify the clientName(CN: hub-xx5nxm-kafka-user) for managed hub cluster(hub-xx5nxm) 2025-08-18T00:42:41.641Z INFO addon/default_agent_controller.go:265 cluster(hub-xx5nxm): isDetaching - false, hasDeployLabel - true I0818 00:42:41.648109 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:41.648Z INFO certificates/csr.go:17 specify the clientName(CN: hub-xx5nxm-kafka-user) for managed hub cluster(hub-xx5nxm) 2025-08-18T00:42:41.649Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-xx5nxm"} I0818 00:42:41.653336 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:41.664Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-xx5nxm"} 2025-08-18T00:42:41.673Z INFO certificates/csr.go:17 specify the clientName(CN: hub-xx5nxm-kafka-user) for managed hub cluster(hub-xx5nxm) I0818 00:42:41.674178 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" 2025-08-18T00:42:41.676Z 
INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-xx5nxm"} I0818 00:42:41.678222 25104 warnings.go:110] "Warning: unknown field \"status.namespace\"" 2025-08-18T00:42:41.683Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-xx5nxm"} I0818 00:42:41.691623 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" 2025-08-18T00:42:41.693Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-xx5nxm"} 2025-08-18T00:42:41.700Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-xx5nxm"} I0818 00:42:41.708715 25104 warnings.go:110] "Warning: unknown field \"status.healthCheck\"" ••2025-08-18T00:42:41.888Z INFO agent/local_agent_controller.go:163 local cluster name changed from local-cluster to local-cluster-new 2025-08-18T00:42:41.912Z INFO agent/local_agent_controller.go:304 create transport secret transport-config-local-cluster-new for local agent •2025-08-18T00:42:42.891Z INFO addon/default_agent_controller.go:248 not found the cluster test-mgh, the controller might triggered by multiclusterglboalhub •I0818 00:42:43.900021 25104 base_controller.go:107] Shutting down CSRApprovingController ... I0818 00:42:43.900059 25104 base_controller.go:107] Shutting down addon-deploy-controller ... I0818 00:42:43.900078 25104 base_controller.go:82] Shutting down worker of addon-deploy-controller controller ... I0818 00:42:43.900093 25104 base_controller.go:72] All addon-deploy-controller workers have been terminated I0818 00:42:43.900093 25104 base_controller.go:107] Shutting down cma-managed-by-controller ... I0818 00:42:43.900108 25104 base_controller.go:82] Shutting down worker of cma-managed-by-controller controller ... I0818 00:42:43.900109 25104 base_controller.go:82] Shutting down worker of CSRApprovingController controller ... I0818 00:42:43.900149 25104 base_controller.go:82] Shutting down worker of addon-registration-controller controller ... I0818 00:42:43.900152 25104 base_controller.go:72] All CSRApprovingController workers have been terminated I0818 00:42:43.900121 25104 base_controller.go:107] Shutting down addon-registration-controller ... I0818 00:42:43.900161 25104 base_controller.go:82] Shutting down worker of CSRSignController controller ... I0818 00:42:43.900166 25104 base_controller.go:72] All addon-registration-controller workers have been terminated I0818 00:42:43.900159 25104 base_controller.go:107] Shutting down CSRSignController ... 
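Two error families repeat through this agent suite: "managedclusteraddons.addon.open-cluster-management.io \"multicluster-global-hub-controller\" already exists" when concurrent reconciles race to create the same addon, and "Operation cannot be fulfilled ... the object has been modified" when an update hits a stale resourceVersion. Both resolve on requeue; the standard client-go idioms sketched below keep such races from being reported as reconcile failures. The helper names are hypothetical, and whether to adopt them is the project's call.

package addonsketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// createIfNotExists makes creation idempotent: a concurrent reconcile that
// already created the object is treated as success.
func createIfNotExists(ctx context.Context, c client.Client, obj client.Object) error {
	if err := c.Create(ctx, obj); err != nil && !apierrors.IsAlreadyExists(err) {
		return err
	}
	return nil
}

// updateWithConflictRetry re-reads the latest version and reapplies the
// mutation whenever the API server reports a resourceVersion conflict.
func updateWithConflictRetry(ctx context.Context, c client.Client, key types.NamespacedName, obj client.Object, mutate func(client.Object)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		if err := c.Get(ctx, key, obj); err != nil {
			return err
		}
		mutate(obj)
		return c.Update(ctx, obj)
	})
}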
I0818 00:42:43.900118 25104 base_controller.go:72] All cma-managed-by-controller workers have been terminated I0818 00:42:43.900173 25104 base_controller.go:72] All CSRSignController workers have been terminated 2025-08-18T00:42:43.900Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:42:43.900Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:42:43.900Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:43.900Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "local-agent-reconciler"} 2025-08-18T00:42:43.900Z INFO controller/controller.go:239 All workers finished {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:42:43.900Z INFO controller/controller.go:239 All workers finished {"controller": "local-agent-reconciler"} 2025-08-18T00:42:43.900Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 00:42:43.900389 25104 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.ManagedClusterAddOn" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:43.900456 25104 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha4.MulticlusterGlobalHub" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:42:43.900463 25104 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" 2025-08-18T00:42:43.900Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:42:43.900Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:42:43.900Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager Ran 10 of 10 Specs in 21.915 seconds SUCCESS! 
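The reconciler error "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-fbxcnb is installed in hosted mode" repeats above at growing intervals, which is the controller's exponential backoff retrying until the hosting-cluster value (presumably an annotation on the ManagedCluster) shows up. A sketch of that lookup follows; only the annotation key is taken from the log, while the function name and error handling are assumptions.

package hostedsketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const hostingClusterAnnotation = "import.open-cluster-management.io/hosting-cluster-name"

// hostingClusterName returns the hosting cluster recorded on a hosted-mode
// managed hub, or an error so the controller requeues (with backoff) until
// the import process has set the annotation.
func hostingClusterName(cluster metav1.Object) (string, error) {
	if name, ok := cluster.GetAnnotations()[hostingClusterAnnotation]; ok && name != "" {
		return name, nil
	}
	return "", fmt.Errorf("failed to get %s when addon in %s is installed in hosted mode",
		hostingClusterAnnotation, cluster.GetName())
}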
-- 10 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestControllers (21.92s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent 22.037s === RUN TestControllers Running Suite: Standalone Agent Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent/standalone_agent ================================================================================================================================================================================= Random Seed: 1755477747 Will run 1 of 1 specs 2025-08-18T00:42:35.633Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1alpha1.MulticlusterGlobalHubAgent"} 2025-08-18T00:42:35.633Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.Deployment"} 2025-08-18T00:42:35.633Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:42:35.633Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:42:35.633Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.ClusterRole"} 2025-08-18T00:42:35.633Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.ClusterRoleBinding"} 2025-08-18T00:42:35.633Z INFO controller/controller.go:183 Starting Controller {"controller": "standalone-agent-reconciler"} 2025-08-18T00:42:35.734Z INFO controller/controller.go:217 Starting workers {"controller": "standalone-agent-reconciler", "worker count": 1} •2025-08-18T00:42:38.808Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:42:38.808Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:42:38.808Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "standalone-agent-reconciler"} 2025-08-18T00:42:38.821Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "", "name": "multicluster-global-hub:multicluster-global-hub-agent", "reconcileID": "3ae38900-07cc-4446-9123-f7ad8e3b00c4", "error": "failed to create/update standalone agent objects: Get \"https://127.0.0.1:37713/apis/apps/v1/namespaces/default/deployments/multicluster-global-hub-agent\": dial tcp 127.0.0.1:37713: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:42:38.821Z INFO controller/controller.go:239 All workers finished {"controller": "standalone-agent-reconciler"} 2025-08-18T00:42:38.821Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 
=== RUN TestControllers
Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/webhook
=====================================================================================================================================
Random Seed: 1755477747
Will run 2 of 2 specs
2025-08-18T00:42:33.897Z INFO controller-runtime.webhook webhook/server.go:183 Registering webhook {"path": "/mutating"}
2025-08-18T00:42:33.897Z INFO controller-runtime.webhook webhook/server.go:191 Starting webhook server
2025-08-18T00:42:33.897Z INFO controller-runtime.certwatcher certwatcher/certwatcher.go:161 Updated current TLS certificate
2025-08-18T00:42:33.898Z INFO controller-runtime.webhook webhook/server.go:242 Serving webhook server {"host": "127.0.0.1", "port": 40403}
2025-08-18T00:42:33.898Z INFO controller-runtime.certwatcher certwatcher/certwatcher.go:115 Starting certificate watcher
2025-08-18T00:42:35.921Z INFO webhook/admission_handler.go:124 The cluster mc1 with label global-hub.open-cluster-management.io/deploy-mode=hosted, importing the managed hub in hosted mode
2025-08-18T00:42:36.022Z INFO webhook/admission_handler.go:137 Add hosted annotation into managedcluster: mc1
•2025-08-18T00:42:36.041Z INFO webhook/admission_handler.go:64 handling klusterletaddonconfig for hosted cluster: mc1
2025-08-18T00:42:36.041Z INFO webhook/admission_handler.go:74 Disable addons in cluster :mc1
•2025-08-18T00:42:36.051Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
2025-08-18T00:42:36.051Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
2025-08-18T00:42:36.052Z INFO manager/internal.go:550 Stopping and waiting for caches
I0818 00:42:36.052268 25288 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManagedCluster" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
2025-08-18T00:42:36.052Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:42:36.052Z INFO controller-runtime.webhook webhook/server.go:249 Shutting down webhook server with timeout of 1 minute
2025-08-18T00:42:37.111Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:42:37.111Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager
Ran 2 of 2 Specs in 9.839 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestControllers (9.84s)
PASS
ok github.com/stolostron/multicluster-global-hub/test/integration/operator/webhook 10.138s
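The two webhook specs exercise mutating admission: a ManagedCluster labeled global-hub.open-cluster-management.io/deploy-mode=hosted gets a hosted annotation added, and the matching klusterletaddonconfig has its addons disabled. A rough sketch of the ManagedCluster half using the standard controller-runtime admission API; the handler type, annotation key, and wiring are assumptions for illustration, not the repository's code:

// Hedged sketch of a mutating admission handler matching the log lines above.
package webhook

import (
	"context"
	"encoding/json"
	"net/http"

	clusterv1 "open-cluster-management.io/api/cluster/v1"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

type clusterMutator struct {
	decoder admission.Decoder
}

func (m *clusterMutator) Handle(ctx context.Context, req admission.Request) admission.Response {
	cluster := &clusterv1.ManagedCluster{}
	if err := m.decoder.Decode(req, cluster); err != nil {
		return admission.Errored(http.StatusBadRequest, err)
	}

	// Only clusters explicitly labeled for hosted mode are mutated.
	if cluster.Labels["global-hub.open-cluster-management.io/deploy-mode"] != "hosted" {
		return admission.Allowed("not a hosted-mode cluster")
	}

	if cluster.Annotations == nil {
		cluster.Annotations = map[string]string{}
	}
	// Hypothetical annotation key; the real key is defined by the operator code.
	cluster.Annotations["import.open-cluster-management.io/hosting-cluster-name"] = "local-cluster"

	raw, err := json.Marshal(cluster)
	if err != nil {
		return admission.Errored(http.StatusInternalServerError, err)
	}
	// Build a JSON patch from the incoming raw object versus the mutated copy.
	return admission.PatchResponseFromRaw(req.Object.Raw, raw)
}

In this sketch, admission.PatchResponseFromRaw computes the patch between the incoming object and the mutated copy, which is the usual way a mutating handler such as the one logging "Add hosted annotation into managedcluster" would apply its change.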
? github.com/stolostron/multicluster-global-hub/test/integration/utils [no test files]
? github.com/stolostron/multicluster-global-hub/test/integration/utils/testpostgres [no test files]
? github.com/stolostron/multicluster-global-hub/test/integration/utils/testpostgres/cmd [no test files]
FAIL
make: *** [test/Makefile:44: integration-test] Error 1
{"component":"entrypoint","error":"wrapped process failed: exit status 2","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:84","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2025-08-18T00:43:41Z"}
INFO[2025-08-18T00:43:43Z] Ran for 26m1s
ERRO[2025-08-18T00:43:43Z] Some steps failed:
ERRO[2025-08-18T00:43:43Z] * could not run steps: step test-integration failed: test "test-integration" failed: could not watch pod: the pod ci-op-yctml9n0/test-integration failed after 4m34s (failed containers: test): ContainerFailed one or more containers exited
Container test exited with code 2, reason Error
INFO[2025-08-18T00:43:43Z] Reporting job state 'failed' with reason 'executing_graph:step_failed:running_pod'