INFO[2025-08-18T00:20:03Z] ci-operator version v20250814-e1c78f45b
INFO[2025-08-18T00:20:03Z] Loading configuration from https://config.ci.openshift.org for stolostron/multicluster-global-hub@main
INFO[2025-08-18T00:20:03Z] Resolved source https://github.com/stolostron/multicluster-global-hub to main@94f583ec, merging: #1888 f9f90049 @dependabot[bot]
INFO[2025-08-18T00:20:03Z] Loading information from https://config.ci.openshift.org for integrated stream ocp/4.18
INFO[2025-08-18T00:20:03Z] Loading information from https://config.ci.openshift.org for integrated stream ocp/4.18
INFO[2025-08-18T00:20:04Z] Building release initial from a snapshot of ocp/4.18
INFO[2025-08-18T00:20:04Z] Building release latest from a snapshot of ocp/4.18
INFO[2025-08-18T00:20:04Z] Using namespace https://console-openshift-console.apps.build11.ci.devcluster.openshift.com/k8s/cluster/projects/ci-op-7m89ydg2
INFO[2025-08-18T00:20:04Z] Setting arch for src arch=amd64 reasons=test-integration
INFO[2025-08-18T00:20:04Z] Running [input:root], src, test-integration
INFO[2025-08-18T00:20:04Z] Tagging stolostron/builder:go1.24-linux into pipeline:root.
INFO[2025-08-18T00:20:04Z] Building src
INFO[2025-08-18T00:20:04Z] Found existing build "src-amd64"
INFO[2025-08-18T00:27:21Z] Build src-amd64 succeeded after 5m53s
INFO[2025-08-18T00:27:21Z] Retrieving digests of member images
INFO[2025-08-18T00:27:22Z] Image ci-op-7m89ydg2/pipeline:src created digest=sha256:7a0ca244017297db62d52e6b5c286aac86ec8c815e7c48c5baf1592804e3ddf4 for-build=src
INFO[2025-08-18T00:27:22Z] Executing test test-integration
INFO[2025-08-18T00:32:34Z] Logs for container test in pod test-integration:
INFO[2025-08-18T00:32:34Z] GOBIN=/tmp/cr-tests-bin go install sigs.k8s.io/controller-runtime/tools/setup-envtest@release-0.20
go: downloading sigs.k8s.io/controller-runtime v0.20.5-0.20250517180713-32e5e9e948a5
go: downloading sigs.k8s.io/controller-runtime/tools/setup-envtest v0.0.0-20250517180713-32e5e9e948a5
go: downloading go.uber.org/zap v1.27.0
go: downloading github.com/go-logr/logr v1.4.2
go: downloading github.com/spf13/pflag v1.0.6
go: downloading github.com/spf13/afero v1.12.0
go: downloading github.com/go-logr/zapr v1.3.0
go: downloading sigs.k8s.io/yaml v1.4.0
go: downloading golang.org/x/text v0.21.0
go: downloading go.uber.org/multierr v1.10.0
KUBEBUILDER_ASSETS="/tmp/.local/share/kubebuilder-envtest/k8s/1.33.0-linux-amd64" go test -v `go list ./test/integration/...`
go: downloading github.com/operator-framework/api v0.33.0
go: downloading k8s.io/apimachinery v0.33.2
go: downloading sigs.k8s.io/controller-runtime v0.19.1
go: downloading github.com/fergusstrange/embedded-postgres v1.31.0
go: downloading gorm.io/driver/postgres v1.6.0
go: downloading github.com/lib/pq v1.10.9
go: downloading github.com/jackc/pgx/v5 v5.7.5
go: downloading gorm.io/gorm v1.30.1
go: downloading k8s.io/client-go v0.33.2
go: downloading k8s.io/api v0.33.2
go: downloading github.com/go-logr/logr v1.4.3
go: downloading github.com/RedHatInsights/strimzi-client-go v0.40.0
go: downloading github.com/cloudevents/sdk-go/v2 v2.16.1
go: downloading github.com/deckarep/golang-set v1.8.0
go: downloading github.com/openshift/api v0.0.0-20250220103441-744790f2cff7
go: downloading github.com/stolostron/multiclusterhub-operator v0.0.0-20250415191038-1e368a726d8b
go: downloading open-cluster-management.io/api v1.0.0
go: downloading open-cluster-management.io/governance-policy-propagator v0.16.0
go: downloading go.uber.org/multierr v1.11.0
go: downloading github.com/jinzhu/now v1.1.5
go: downloading github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8
go: downloading github.com/gogo/protobuf v1.3.2
go: downloading k8s.io/utils v0.0.0-20250604170112-4c0f3b243397
go: downloading sigs.k8s.io/randfill v1.0.0
go: downloading k8s.io/klog/v2 v2.130.1
go: downloading sigs.k8s.io/structured-merge-diff/v4 v4.7.0
go: downloading k8s.io/apiextensions-apiserver v0.33.2
go: downloading github.com/jackc/puddle/v2 v2.2.2
go: downloading github.com/jackc/pgpassfile v1.0.0
go: downloading github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761
go: downloading golang.org/x/crypto v0.41.0
go: downloading golang.org/x/text v0.28.0
go: downloading k8s.io/klog v1.0.0
go: downloading github.com/jinzhu/inflection v1.0.0
go: downloading github.com/sirupsen/logrus v1.9.3
go: downloading github.com/evanphx/json-patch/v5 v5.9.11
go: downloading github.com/evanphx/json-patch v5.9.11+incompatible
go: downloading gopkg.in/inf.v0 v0.9.1
go: downloading sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8
go: downloading github.com/google/uuid v1.6.0
go: downloading github.com/json-iterator/go v1.1.12
go: downloading golang.org/x/sync v0.16.0
go: downloading github.com/blang/semver/v4 v4.0.0
go: downloading sigs.k8s.io/yaml v1.5.0
go: downloading golang.org/x/sys v0.35.0
go: downloading k8s.io/kube-openapi v0.0.0-20250610211856-8b98d1ed966a
go: downloading github.com/fxamacker/cbor/v2 v2.8.0
go: downloading golang.org/x/net v0.43.0
go: downloading gomodules.xyz/jsonpatch/v2 v2.4.0
go: downloading github.com/x448/float16 v0.8.4
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go: downloading github.com/modern-go/reflect2 v1.0.2
go: downloading golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b
go: downloading github.com/prometheus/client_golang v1.22.0
go: downloading github.com/google/gnostic-models v0.6.9
go: downloading github.com/fsnotify/fsnotify v1.8.0
go: downloading google.golang.org/protobuf v1.36.6
go: downloading go.yaml.in/yaml/v2 v2.4.2
go: downloading github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822
go: downloading github.com/spf13/pflag v1.0.7
go: downloading golang.org/x/term v0.34.0
go: downloading golang.org/x/time v0.12.0
go: downloading github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
go: downloading github.com/google/go-cmp v0.7.0
go: downloading golang.org/x/oauth2 v0.30.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading gopkg.in/evanphx/json-patch.v4 v4.12.0
go: downloading github.com/go-openapi/jsonreference v0.21.0
go: downloading github.com/go-openapi/swag v0.23.1
go: downloading github.com/emicklei/go-restful/v3 v3.12.1
go: downloading github.com/prometheus/client_model v0.6.2
go: downloading github.com/prometheus/common v0.65.0
go: downloading github.com/beorn7/perks v1.0.1
go: downloading github.com/cespare/xxhash/v2 v2.3.0
go: downloading github.com/prometheus/procfs v0.16.1
go: downloading github.com/go-openapi/jsonpointer v0.21.1
go: downloading github.com/mailru/easyjson v0.9.0
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/josharian/intern v1.0.0
go: downloading github.com/onsi/ginkgo/v2 v2.23.4
go: downloading github.com/onsi/gomega v1.38.0
go: downloading github.com/go-co-op/gocron v1.37.0
go: downloading github.com/stolostron/klusterlet-addon-controller v0.0.0-20250224012200-769f091c0e95
go: downloading open-cluster-management.io/multicloud-operators-subscription v0.16.0
go: downloading github.com/authzed/spicedb-operator v1.20.1
go: downloading github.com/crunchydata/postgres-operator v1.3.3-0.20230629151007-94ebcf2df74d
go: downloading github.com/cloudflare/cfssl v1.6.5
go: downloading github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.76.0
go: downloading github.com/go-kratos/kratos/v2 v2.8.4
go: downloading github.com/project-kessel/inventory-api v0.0.0-20241213103024-feb181fd66c1
go: downloading github.com/project-kessel/inventory-client-go v0.0.0-20240927104800-2c124202b25f
go: downloading github.com/gin-gonic/gin v1.10.1
go: downloading open-cluster-management.io/multicloud-operators-channel v0.16.0
go: downloading sigs.k8s.io/application v0.8.3
go: downloading github.com/stolostron/cluster-lifecycle-api v0.0.0-20250429012240-363012f4f827
go: downloading k8s.io/kube-aggregator v0.32.6
go: downloading sigs.k8s.io/kustomize/kyaml v0.20.0
go: downloading github.com/cloudevents/sdk-go/protocol/kafka_confluent/v2 v2.0.0-20250811193955-d8449ff1e35a
go: downloading github.com/confluentinc/confluent-kafka-go/v2 v2.11.0
go: downloading open-cluster-management.io/managed-serviceaccount v0.8.0
go: downloading github.com/IBM/sarama v1.45.2
go: downloading gorm.io/datatypes v1.2.6
go: downloading github.com/openshift/client-go v0.0.0-20250131180035-f7ec47e2d87a
go: downloading open-cluster-management.io/addon-framework v0.12.1-0.20250422083707-fb6b4ebb66b5
go: downloading gopkg.in/ini.v1 v1.67.0
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/robfig/cron/v3 v3.0.1
go: downloading go.uber.org/atomic v1.11.0
go: downloading github.com/authzed/grpcutil v0.0.0-20240123194739-2ea1e3d2d98b
go: downloading github.com/golang-jwt/jwt/v5 v5.2.2
go: downloading github.com/patrickmn/go-cache v2.1.0+incompatible
go: downloading google.golang.org/grpc v1.73.0
go: downloading github.com/gin-contrib/sse v1.1.0
go: downloading github.com/mattn/go-isatty v0.0.20
go: downloading gorm.io/driver/mysql v1.5.6
go: downloading github.com/openshift/library-go v0.0.0-20250228164547-bad2d1bf3a37
go: downloading buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.35.2-20240920164238-5a7b106cbb87.1
go: downloading google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822
go: downloading github.com/go-kratos/aegis v0.2.0
go: downloading github.com/gorilla/mux v1.8.1
go: downloading github.com/certifi/gocertifi v0.0.0-20210507211836-431795d63e8d
go: downloading github.com/grpc-ecosystem/go-grpc-middleware v1.4.0
go: downloading github.com/stretchr/testify v1.10.0
go: downloading github.com/go-playground/validator/v10 v10.26.0
go: downloading github.com/pelletier/go-toml/v2 v2.2.4
go: downloading github.com/ugorji/go/codec v1.2.12
go: downloading github.com/pelletier/go-toml v1.9.5
go: downloading open-cluster-management.io/sdk-go v0.16.0
go: downloading github.com/fatih/structs v1.1.0
go: downloading helm.sh/helm/v3 v3.18.4
go: downloading go.yaml.in/yaml/v3 v3.0.3
go: downloading github.com/go-sql-driver/mysql v1.8.1
go: downloading github.com/eapache/go-resiliency v1.7.0
go: downloading github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3
go: downloading github.com/hashicorp/go-multierror v1.1.1
go: downloading github.com/jcmturner/gofork v1.7.6
go: downloading github.com/eapache/queue v1.1.0
go: downloading github.com/jcmturner/gokrb5/v8 v8.4.4
go: downloading github.com/klauspost/compress v1.18.0
go: downloading github.com/pierrec/lz4/v4 v4.1.22
go: downloading github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475
go: downloading github.com/stolostron/multicloud-operators-foundation v0.0.0-20241223014534-09421f48bba2
go: downloading k8s.io/apiserver v0.33.2
go: downloading google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822
go: downloading github.com/cenkalti/backoff/v4 v4.3.0
go: downloading github.com/zmap/zlint/v3 v3.5.0
go: downloading github.com/google/certificate-transparency-go v1.1.7
go: downloading github.com/zmap/zcrypto v0.0.0-20230310154051-c8b263fd8300
go: downloading github.com/gabriel-vasile/mimetype v1.4.9
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/leodido/go-urn v1.4.0
go: downloading github.com/go-errors/errors v1.5.1
go: downloading github.com/go-playground/form/v4 v4.2.1
go: downloading filippo.io/edwards25519 v1.1.0
go: downloading github.com/hashicorp/errwrap v1.1.0
go: downloading github.com/golang/snappy v0.0.4
go: downloading github.com/jcmturner/dnsutils/v2 v2.0.0
go: downloading github.com/hashicorp/go-uuid v1.0.3
go: downloading github.com/jmoiron/sqlx v1.4.0
go: downloading github.com/Masterminds/semver/v3 v3.3.1
go: downloading github.com/cyphar/filepath-securejoin v0.4.1
go: downloading github.com/mitchellh/copystructure v1.2.0
go: downloading github.com/xeipuuv/gojsonschema v1.2.0
go: downloading github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2
go: downloading github.com/BurntSushi/toml v1.5.0
go: downloading github.com/Masterminds/sprig/v3 v3.3.0
go: downloading github.com/gobwas/glob v0.2.3
go: downloading github.com/go-playground/locales v0.14.1
go: downloading sigs.k8s.io/kube-storage-version-migrator v0.0.6-0.20230721195810-5c8923c5ff96
go: downloading github.com/jcmturner/rpc/v2 v2.0.3
go: downloading github.com/mitchellh/reflectwalk v1.0.2
go: downloading github.com/jcmturner/aescts/v2 v2.0.0
go: downloading github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415
go: downloading dario.cat/mergo v1.0.1
go: downloading github.com/Masterminds/goutils v1.1.1
go: downloading github.com/huandu/xstrings v1.5.0
go: downloading github.com/shopspring/decimal v1.4.0
go: downloading github.com/spf13/cast v1.7.0
go: downloading github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb
go: downloading github.com/weppos/publicsuffix-go v0.30.0
go: downloading k8s.io/component-base v0.33.2
go: downloading go.opentelemetry.io/otel/trace v1.36.0
go: downloading go.opentelemetry.io/otel v1.36.0
=== RUN
TestIntegration
Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/controller
=====================================================================================================================================
Random Seed: 1755477043
Will run 5 of 5 specs
2025-08-18T00:30:48.322Z INFO controller/controller.go:175 Starting EventSource {"controller": "hubclusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "source": "kind source: *v1alpha1.ClusterClaim"}
2025-08-18T00:30:48.322Z INFO controller/controller.go:183 Starting Controller {"controller": "hubclusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim"}
2025-08-18T00:30:48.322Z INFO controller/controller.go:175 Starting EventSource {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "source": "kind source: *v1alpha1.ClusterClaim"}
2025-08-18T00:30:48.322Z INFO controller/controller.go:175 Starting EventSource {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "source": "kind source: *v1.MultiClusterHub"}
2025-08-18T00:30:48.322Z INFO controller/controller.go:183 Starting Controller {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim"}
2025-08-18T00:30:48.424Z INFO controller/controller.go:217 Starting workers {"controller": "hubclusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "worker count": 1}
2025-08-18T00:30:48.424Z INFO controller/controller.go:217 Starting workers {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "worker count": 1}
2025-08-18T00:30:50.502Z INFO controllers/clusterclaim_hub_controller.go:33 NamespacedName: /test2
2025-08-18T00:30:50.607Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "74b08260-74ec-4931-81fa-6a9e6e5ec600", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000cd4960}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc000cd4960}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000cd48d0?, {0x29f75c0?, 0xc000cd4960?}, {{{0x0?, 0x0?}, {0xc001b9d300?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"}
k8s.io/apimachinery/pkg/util/runtime.logPanic
	/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105
runtime.gopanic
	/usr/local/go/src/runtime/panic.go:792
runtime.panicmem
	/usr/local/go/src/runtime/panic.go:262
runtime.sigpanic
	/usr/local/go/src/runtime/signal_unix.go:925
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:50.607Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "74b08260-74ec-4931-81fa-6a9e6e5ec600", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:50.614Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "ee615e85-2e7a-483a-8913-98b4cf2c1292", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001414ed0}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001414ed0}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001414c90?, {0x29f75c0?, 0xc001414ed0?}, {{{0x0?, 0x0?}, {0xc001b9d300?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"}
k8s.io/apimachinery/pkg/util/runtime.logPanic
	/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105
runtime.gopanic
	/usr/local/go/src/runtime/panic.go:792
runtime.panicmem
	/usr/local/go/src/runtime/panic.go:262
runtime.sigpanic
	/usr/local/go/src/runtime/signal_unix.go:925
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:50.614Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "ee615e85-2e7a-483a-8913-98b4cf2c1292", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:50.627Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "477e62e3-45f8-4e46-9b54-c547fe01ae28", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001d83950}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001d83950}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001d83890?, {0x29f75c0?, 0xc001d83950?}, {{{0x0?, 0x0?}, {0xc001b9d300?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"}
k8s.io/apimachinery/pkg/util/runtime.logPanic
	/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105
runtime.gopanic
	/usr/local/go/src/runtime/panic.go:792
runtime.panicmem
	/usr/local/go/src/runtime/panic.go:262
runtime.sigpanic
	/usr/local/go/src/runtime/signal_unix.go:925
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:50.627Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "477e62e3-45f8-4e46-9b54-c547fe01ae28", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:50.649Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "fceb25a9-7320-45f0-850c-17fb0b770e5a", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001415c20}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001415c20}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001415b30?, {0x29f75c0?, 0xc001415c20?}, {{{0x0?, 0x0?}, {0xc001b9d300?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"}
k8s.io/apimachinery/pkg/util/runtime.logPanic
	/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105
runtime.gopanic
	/usr/local/go/src/runtime/panic.go:792
runtime.panicmem
	/usr/local/go/src/runtime/panic.go:262
runtime.sigpanic
	/usr/local/go/src/runtime/signal_unix.go:925
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:50.649Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "fceb25a9-7320-45f0-850c-17fb0b770e5a", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:50.691Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "eacb29f7-a3b0-4ad8-83c7-d7a6b9d4ab36", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001415ec0}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001415ec0}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001415e00?, {0x29f75c0?, 0xc001415ec0?}, {{{0x0?, 0x0?}, {0xc001b9d300?,
0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:50.691Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "eacb29f7-a3b0-4ad8-83c7-d7a6b9d4ab36", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:30:50.713Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "2842dc9a-6f10-4f08-a8ca-97cee4f6fb46", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil 
pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc000cd5920}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc000cd5920}, {{{0x0, 0x0}, {0xc0008b0700, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc000cd5740?, {0x29f75c0?, 0xc000cd5920?}, {{{0x0?, 0x0?}, {0xc0008b0700?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc0008b0700, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 
338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:50.713Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "2842dc9a-6f10-4f08-a8ca-97cee4f6fb46", "error": "panic: runtime error: invalid memory address or nil pointer 
dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:30:50.717Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "3afbde5b-0583-4fd1-86a4-102c08f00b5a", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc00162f470}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc00162f470}, {{{0x0, 0x0}, {0xc0008b0700, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 
+0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc00162f3e0?, {0x29f75c0?, 0xc00162f470?}, {{{0x0?, 0x0?}, {0xc0008b0700?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc0008b0700, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:50.717Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "3afbde5b-0583-4fd1-86a4-102c08f00b5a", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:50.736Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", 
"reconcileID": "54bbaed2-f19a-4889-971d-afcde87ac608", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc0008c47b0}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc0008c47b0}, {{{0x0, 0x0}, {0xc000bfafa0, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc0008c4720?, {0x29f75c0?, 0xc0008c47b0?}, {{{0x0?, 0x0?}, {0xc000bfafa0?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc000bfafa0, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated 
by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:50.736Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": 
"54bbaed2-f19a-4889-971d-afcde87ac608", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:50.738Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "b7b2c87d-c04b-4ed5-b7c9-8f12766bc128", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001c47290}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001c47290}, {{{0x0, 0x0}, {0xc000bfafa0, 
0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001c47200?, {0x29f75c0?, 0xc001c47290?}, {{{0x0?, 0x0?}, {0xc000bfafa0?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc000bfafa0, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile 
/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:50.738Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "b7b2c87d-c04b-4ed5-b7c9-8f12766bc128", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.156Z ERROR controllers/clusterclaim_version_controller.go:35 Operation cannot be fulfilled on 
clusterclaims.cluster.open-cluster-management.io "hub.open-cluster-management.io": StorageError: invalid object, Code: 4, Key: /registry/cluster.open-cluster-management.io/clusterclaims/hub.open-cluster-management.io, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 49edbbf0-e687-408e-a46e-e8dfb3b3f2cb, UID in object meta: failed to update Hub clusterClaim
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35

2025-08-18T00:30:51.156Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "2d9a5228-dd71-403c-b28d-dcec53a68146", "error": "Operation cannot be fulfilled on clusterclaims.cluster.open-cluster-management.io \"hub.open-cluster-management.io\": StorageError: invalid object, Code: 4, Key: /registry/cluster.open-cluster-management.io/clusterclaims/hub.open-cluster-management.io, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 49edbbf0-e687-408e-a46e-e8dfb3b3f2cb, UID in object meta: "}

•2025-08-18T00:30:51.157Z INFO controllers/clusterclaim_hub_controller.go:33 NamespacedName: /version.open-cluster-management.io

2025-08-18T00:30:51.157Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "541e12a3-a68a-4056-b4e6-81ad103b4821", "panic": "runtime error: invalid memory address or nil pointer dereference"} [identical nil-pointer stack at clusterclaim_version_controller.go:44, followed at the same timestamp by the matching "Reconciler error ... [recovered]" entry]

•2025-08-18T00:30:51.160Z ERROR controller/controller.go:316 Reconciler error {"controller": "hubclusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "1061b7f5-4c80-4cd1-84cb-95aa9b1b38e6", "error": "clusterclaims.cluster.open-cluster-management.io \"hub.open-cluster-management.io\" already exists"}

2025-08-18T00:30:51.160Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "b0d0b39a-32d4-43b6-afa6-b1f2df48958b", "panic": "runtime error: invalid memory address or nil pointer dereference", "stacktrace": "goroutine 380
+0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001248210}, {{{0x0, 0x0}, {0xc000ebe480, 0x22}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001248180?, {0x29f75c0?, 0xc001248210?}, {{{0x0?, 0x0?}, {0xc000ebe480?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc000ebe480, 0x22}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.160Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "b0d0b39a-32d4-43b6-afa6-b1f2df48958b", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 
2025-08-18T00:30:51.162Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "573ad5e8-8062-4396-81ea-3bfe3b2fe26c", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001052330}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001052330}, {{{0x0, 0x0}, {0xc001b9df80, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc0010522a0?, {0x29f75c0?, 0xc001052330?}, {{{0x0?, 0x0?}, {0xc001b9df80?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc001b9df80, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 
0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.162Z ERROR 
controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "573ad5e8-8062-4396-81ea-3bfe3b2fe26c", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.163Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "71404519-f3f3-4ed6-9bd5-24cd506626f4", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc0010529f0}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 
0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc0010529f0}, {{{0xc001316320, 0x7}, {0xc001316310, 0xf}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001052960?, {0x29f75c0?, 0xc0010529f0?}, {{{0xc001316320?, 0x0?}, {0xc001316310?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0xc001316320, 0x7}, {0xc001316310, 0xf}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem 
/usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.163Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "71404519-f3f3-4ed6-9bd5-24cd506626f4", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.165Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "bb875c35-6ead-432f-8d6c-e55772b3c259", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001052f90}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001052f90}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001052f00?, {0x29f75c0?, 0xc001052f90?}, {{{0x0?, 0x0?}, {0xc001b9d300?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc001b9d300, 0x1e}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
+0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.165Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "bb875c35-6ead-432f-8d6c-e55772b3c259", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.165Z INFO controllers/clusterclaim_hub_controller.go:33 NamespacedName: /version.open-cluster-management.io 2025-08-18T00:30:51.167Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "1a5ada52-5459-457b-adf8-a728948e5e10", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001053b00}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 
+0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001053b00}, {{{0x0, 0x0}, {0xc000ebe480, 0x22}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001053a70?, {0x29f75c0?, 0xc001053b00?}, {{{0x0?, 0x0?}, {0xc000ebe480?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc000ebe480, 0x22}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.167Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "1a5ada52-5459-457b-adf8-a728948e5e10", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.175Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "393064c8-3bb1-4f61-84df-aea22dd42fa4", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001248510}, {0x21e0ea0, 0x3eb8bc0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x21e0ea0?, 0x3eb8bc0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001248510}, {{{0xc001316320, 0x7}, {0xc001316310, 0xf}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001248480?, {0x29f75c0?, 0xc001248510?}, {{{0xc001316320?, 0x0?}, {0xc001316310?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 
+0xbf
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0xc001316320, 0x7}, {0xc001316310, 0xf}}})
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d
"}
k8s.io/apimachinery/pkg/util/runtime.logPanic
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105
runtime.gopanic
    /usr/local/go/src/runtime/panic.go:792
runtime.panicmem
    /usr/local/go/src/runtime/panic.go:262
runtime.sigpanic
    /usr/local/go/src/runtime/signal_unix.go:925
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile
    /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:51.175Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "393064c8-3bb1-4f61-84df-aea22dd42fa4", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:51.179Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "7dfcfd35-ded2-4521-9bb4-16a2f282860b", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 380 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x29f75c0, 0xc001498450}, {0x21e0ea0, 0x3eb8bc0})
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112
panic({0x21e0ea0?, 0x3eb8bc0?})
    /usr/local/go/src/runtime/panic.go:792 +0x132
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile(0xc000dc7360, {0x29f75c0, 0xc001498450}, {{{0x0, 0x0}, {0xc000ebe480, 0x22}}})
    /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44 +0x228
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc0014983c0?, {0x29f75c0?, 0xc001498450?}, {{{0x0?, 0x0?}, {0xc000ebe480?, 0x0?}}})
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x2a0aba0, {0x29f75f8, 0xc0007b95e0}, {{{0x0, 0x0}, {0xc000ebe480, 0x22}}})
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x2a0aba0, {0x29f75f8, 0xc0007b95e0})
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 338
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d
"}
k8s.io/apimachinery/pkg/util/runtime.logPanic
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105
runtime.gopanic
    /usr/local/go/src/runtime/panic.go:792
runtime.panicmem
    /usr/local/go/src/runtime/panic.go:262
runtime.sigpanic
    /usr/local/go/src/runtime/signal_unix.go:925
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile
    /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:44
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:51.179Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "7dfcfd35-ded2-4521-9bb4-16a2f282860b", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:30:51.187Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "4e7ddef0-97e7-4278-9f61-16a761c094be", "panic": "runtime error: invalid memory address or nil pointer dereference"}
2025-08-18T00:30:51.187Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "4e7ddef0-97e7-4278-9f61-16a761c094be", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
2025-08-18T00:30:51.198Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "4e08d52b-8e6f-48a8-af8a-8ac738c67537", "panic": "runtime error: invalid memory address or nil pointer dereference"}
2025-08-18T00:30:51.199Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "4e08d52b-8e6f-48a8-af8a-8ac738c67537", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
2025-08-18T00:30:51.201Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "f165cb43-3b4c-43d2-b25b-52b086354a41", "panic": "runtime error: invalid memory address or nil pointer dereference"}
2025-08-18T00:30:51.202Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "f165cb43-3b4c-43d2-b25b-52b086354a41", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
2025-08-18T00:30:51.230Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "0bd2ef54-c7ef-4122-b794-2dac370e5ffc", "panic": "runtime error: invalid memory address or nil pointer dereference"}
2025-08-18T00:30:51.230Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "0bd2ef54-c7ef-4122-b794-2dac370e5ffc", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
2025-08-18T00:30:51.242Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "eba2b07b-eb26-4d3c-9e0d-37f0fceb0f54", "panic": "runtime error: invalid memory address or nil pointer dereference"}
2025-08-18T00:30:51.242Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "eba2b07b-eb26-4d3c-9e0d-37f0fceb0f54", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
2025-08-18T00:30:51.244Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "d49f9fe0-3345-4bd8-84a6-9900f8143acd", "panic": "runtime error: invalid memory address or nil pointer dereference"}
2025-08-18T00:30:51.244Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "d49f9fe0-3345-4bd8-84a6-9900f8143acd", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
•
2025-08-18T00:30:51.312Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refused failed to update Hub clusterClaim
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile
    /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.312Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "79a77032-b052-499e-84ca-81497a25247c", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.323Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile 
/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.323Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "0e99e80f-d10f-4ded-923f-d480034219a1", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.324Z 
ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:51.325Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "31913b55-7a0e-4424-8b6b-52c036ba8663", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:56.277Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "1c8ce55a-9e90-446f-a816-3b091b2c10a3", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:56.288Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:56.288Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "a1d87a93-d333-4326-b55d-e1978372fc5d", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:56.293Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:30:56.293Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": 
"ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "65387a6e-466e-437f-a22f-b54375c26c7f", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:01.399Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:01.399Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "e9c586a4-649b-404b-98c5-5dcf49dc2ce7", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:01.409Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:01.409Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "fc688ab4-5b23-429b-9f88-ccfedbc7e43b", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:01.414Z ERROR controllers/clusterclaim_version_controller.go:35 Put 
"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:01.414Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "0968ca27-2d21-4bf5-9c03-436f9500594a", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:11.640Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:11.640Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"hub.open-cluster-management.io"}, "namespace": "", "name": "hub.open-cluster-management.io", "reconcileID": "2bd0899c-7d8a-4b1d-a4e6-b06198fe73b6", 
"error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:11.650Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 
2025-08-18T00:31:11.650Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"multiclusterhub","namespace":"default"}, "namespace": "default", "name": "multiclusterhub", "reconcileID": "c369c636-b8bf-406b-8f5a-da51e7db84db", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:11.656Z ERROR controllers/clusterclaim_version_controller.go:35 Put "https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io": dial tcp 127.0.0.1:34613: connect: connection refusedfailed to update Hub clusterClaim github.com/stolostron/multicluster-global-hub/agent/pkg/controllers.(*versionClusterClaimController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/controllers/clusterclaim_version_controller.go:35 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:11.656Z ERROR controller/controller.go:316 Reconciler error {"controller": "clusterclaim-controller", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ClusterClaim", "ClusterClaim": {"name":"version.open-cluster-management.io"}, "namespace": "", "name": "version.open-cluster-management.io", "reconcileID": "f07cf5cf-fb2d-4f24-925e-6cf541bca9bc", "error": "Put \"https://127.0.0.1:34613/apis/cluster.open-cluster-management.io/v1alpha1/clusterclaims/hub.open-cluster-management.io\": dial tcp 127.0.0.1:34613: connect: connection refused"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 Ran 5 of 5 Specs in 31.972 seconds SUCCESS! 
-- 5 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestIntegration (31.97s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/agent/controller 32.022s === RUN TestMigration Running Suite: Agent Migration Integration Test Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration ============================================================================================================================================== Random Seed: 1755477050 Will run 17 of 17 specs 2025-08-18T00:30:56.389Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:30:58.429Z INFO syncers/migration_from_syncer.go:131 migration Initializing started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:30:58.530Z INFO syncers/migration_from_syncer.go:218 bootstrap secret bootstrap-hub2 is unchanged 2025-08-18T00:30:58.748Z INFO syncers/migration_from_syncer.go:270 managed clusters test-cluster-1 is updated 2025-08-18T00:30:58.748Z INFO syncers/migration_from_syncer.go:140 migration Initializing completed: migrationId=test-migration-123 •2025-08-18T00:30:58.750Z INFO syncers/migration_from_syncer.go:131 migration Deploying started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:30:58.851Z INFO syncers/migration_from_syncer.go:189 deploying: attach clusters and addonConfigs into the event 2025-08-18T00:30:58.851Z INFO syncers/migration_from_syncer.go:140 migration Deploying completed: migrationId=test-migration-123 •2025-08-18T00:30:58.951Z INFO syncers/migration_from_syncer.go:131 migration Registering started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:30:58.951Z INFO syncers/migration_from_syncer.go:340 updating managedcluster test-cluster-1 to set HubAcceptsClient as false 2025-08-18T00:30:58.956Z INFO syncers/migration_from_syncer.go:140 migration Registering completed: migrationId=test-migration-123 
•2025-08-18T00:30:58.956Z INFO syncers/migration_from_syncer.go:131 migration Cleaning started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:30:58.956Z INFO syncers/migration_to_syncer.go:831 deleting resource multicluster-global-hub/bootstrap-hub2 2025-08-18T00:30:59.060Z INFO syncers/migration_to_syncer.go:831 deleting resource /migration-hub2 2025-08-18T00:30:59.065Z INFO syncers/migration_from_syncer.go:161 cleaning up 1 managed clusters 2025-08-18T00:30:59.068Z INFO syncers/migration_from_syncer.go:679 deleted managed cluster test-cluster-1 2025-08-18T00:30:59.068Z INFO syncers/migration_from_syncer.go:140 migration Cleaning completed: migrationId=test-migration-123 •2025-08-18T00:31:00.079Z INFO syncers/migration_from_syncer.go:131 migration Rollbacking started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:31:00.079Z INFO syncers/migration_from_syncer.go:464 performing rollback for stage: Initializing 2025-08-18T00:31:00.079Z INFO syncers/migration_from_syncer.go:485 cleaning up bootstrap secret: test 2025-08-18T00:31:00.079Z INFO syncers/migration_from_syncer.go:489 successfully deleted bootstrap secret: test 2025-08-18T00:31:00.079Z INFO syncers/migration_from_syncer.go:498 cleaning up KlusterletConfig: migration-hub2 2025-08-18T00:31:00.079Z INFO syncers/migration_from_syncer.go:502 successfully deleted KlusterletConfig: migration-hub2 2025-08-18T00:31:00.079Z INFO syncers/migration_from_syncer.go:507 cleaning up annotations for managed cluster: test-cluster-1 2025-08-18T00:31:00.082Z INFO syncers/migration_from_syncer.go:553 successfully removed migration annotations from managed cluster: test-cluster-1 2025-08-18T00:31:00.082Z INFO syncers/migration_from_syncer.go:140 migration Rollbacking completed: migrationId=test-migration-123 •2025-08-18T00:31:01.188Z INFO syncers/migration_from_syncer.go:131 migration Rollbacking started: migrationId=test-migration-123, clusters=[test-cluster-1] 
2025-08-18T00:31:01.189Z INFO syncers/migration_from_syncer.go:464 performing rollback for stage: Deploying 2025-08-18T00:31:01.189Z INFO syncers/migration_from_syncer.go:567 rollback deploying stage for clusters: [test-cluster-1] 2025-08-18T00:31:01.189Z INFO syncers/migration_from_syncer.go:498 cleaning up KlusterletConfig: migration-hub2 2025-08-18T00:31:01.189Z INFO syncers/migration_from_syncer.go:502 successfully deleted KlusterletConfig: migration-hub2 2025-08-18T00:31:01.189Z INFO syncers/migration_from_syncer.go:507 cleaning up annotations for managed cluster: test-cluster-1 2025-08-18T00:31:01.196Z INFO syncers/migration_from_syncer.go:553 successfully removed migration annotations from managed cluster: test-cluster-1 2025-08-18T00:31:01.196Z INFO syncers/migration_from_syncer.go:579 completed deploying stage rollback 2025-08-18T00:31:01.196Z INFO syncers/migration_from_syncer.go:140 migration Rollbacking completed: migrationId=test-migration-123 •2025-08-18T00:31:02.214Z INFO syncers/migration_from_syncer.go:131 migration Rollbacking started: migrationId=test-migration-123, clusters=[test-cluster-1] 2025-08-18T00:31:02.214Z INFO syncers/migration_from_syncer.go:464 performing rollback for stage: Registering 2025-08-18T00:31:02.214Z INFO syncers/migration_from_syncer.go:585 rollback registering stage for clusters: [test-cluster-1] 2025-08-18T00:31:02.214Z INFO syncers/migration_from_syncer.go:567 rollback deploying stage for clusters: [test-cluster-1] 2025-08-18T00:31:02.214Z INFO syncers/migration_from_syncer.go:498 cleaning up KlusterletConfig: migration-hub2 2025-08-18T00:31:02.214Z INFO syncers/migration_from_syncer.go:502 successfully deleted KlusterletConfig: migration-hub2 2025-08-18T00:31:02.214Z INFO syncers/migration_from_syncer.go:507 cleaning up annotations for managed cluster: test-cluster-1 2025-08-18T00:31:02.221Z INFO syncers/migration_from_syncer.go:553 successfully removed migration annotations from managed cluster: test-cluster-1 
2025-08-18T00:31:02.221Z INFO syncers/migration_from_syncer.go:579 completed deploying stage rollback
2025-08-18T00:31:02.224Z INFO syncers/migration_from_syncer.go:140 migration Rollbacking completed: migrationId=test-migration-123
•2025-08-18T00:31:02.225Z INFO syncers/migration_from_syncer.go:131 migration Initializing started: migrationId=error-test-1, clusters=[test-cluster-1]
2025-08-18T00:31:02.225Z ERROR syncers/migration_from_syncer.go:135 migration Initializing failed: migrationId=error-test-1, error=bootstrap secret is nil when initializing
github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).executeStage
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:135
github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).handleStage
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:112
github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).Sync
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:88
github.com/stolostron/multicluster-global-hub/test/integration/agent/migration_test.init.func1.5.1
	/go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration/migration_from_syncer_test.go:463
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3
	/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/node.go:475
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3
	/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/suite.go:894
•2025-08-18T00:31:02.229Z INFO syncers/migration_from_syncer.go:131 migration Initializing started: migrationId=error-test-2, clusters=[non-existent-cluster]
2025-08-18T00:31:02.232Z INFO syncers/migration_from_syncer.go:218 bootstrap secret bootstrap-hub2-test2 is unchanged
2025-08-18T00:31:02.232Z ERROR syncers/migration_from_syncer.go:135 migration Initializing failed: migrationId=error-test-2, error=failed to create/update bootstrap secret: secrets "bootstrap-hub2-test2" already exists
github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).executeStage
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:135
github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).handleStage
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:112
github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers.(*MigrationSourceSyncer).Sync
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/spec/syncers/migration_from_syncer.go:88
github.com/stolostron/multicluster-global-hub/test/integration/agent/migration_test.init.func1.5.2
	/go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration/migration_from_syncer_test.go:508
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3
	/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/node.go:475
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3
	/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/suite.go:894
------------------------------
• [FAILED] [0.024 seconds]
MigrationFromSyncer Error handling scenarios [It] should handle missing managed cluster during deployment
/go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration/migration_from_syncer_test.go:468

  Timeline >>
  STEP: Creating migration event for non-existent cluster @ 08/18/25 00:31:02.225
  STEP: Creating bootstrap secret for test @ 08/18/25 00:31:02.225
  STEP: Preparing clean bootstrap secret for event @ 08/18/25 00:31:02.229
  STEP: Processing event and expecting failure for non-existent cluster @ 08/18/25 00:31:02.229
  [FAILED] in [It] - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration/migration_from_syncer_test.go:510 @ 08/18/25 00:31:02.235
  << Timeline

  [FAILED] Expected
      : failed to handle migration stage: failed to create/update bootstrap secret: secrets "bootstrap-hub2-test2" already exists
  to contain substring
      : "non-existent-cluster" not found
  In [It] at: /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration/migration_from_syncer_test.go:510 @ 08/18/25 00:31:02.235
------------------------------
S2025-08-18T00:31:02.264Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub
2025-08-18T00:31:02.265Z INFO syncers/migration_to_syncer.go:163 migration Initializing started: migrationId=test-migration-456, clusters=[]
2025-08-18T00:31:02.370Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.registrationConfiguration.autoApproveUsers"
2025-08-18T00:31:02.472Z INFO syncers/migration_to_syncer.go:425 creating migration clusterrole
2025-08-18T00:31:02.580Z INFO syncers/migration_to_syncer.go:541 creating subjectaccessreviews clusterrolebinding
2025-08-18T00:31:02.583Z INFO syncers/migration_to_syncer.go:483 creating agent registration clusterrolebinding global-hub-migration-migration-service-account-registration
2025-08-18T00:31:02.586Z INFO syncers/migration_to_syncer.go:171 migration Initializing completed: migrationId=test-migration-456
•2025-08-18T00:31:02.711Z INFO syncers/migration_to_syncer.go:69 received migration event from hub1
2025-08-18T00:31:02.711Z INFO syncers/migration_to_syncer.go:296 started the deploying: test-migration-456
2025-08-18T00:31:02.815Z INFO syncers/migration_to_syncer.go:315 finished syncing migration resources
•2025-08-18T00:31:03.027Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub
2025-08-18T00:31:03.027Z INFO syncers/migration_to_syncer.go:163 migration Registering started:
migrationId=test-migration-456, clusters=[test-cluster-2] 2025-08-18T00:31:03.127Z INFO syncers/migration_to_syncer.go:231 all 1 managed clusters are ready for migration 2025-08-18T00:31:03.127Z INFO syncers/migration_to_syncer.go:171 migration Registering completed: migrationId=test-migration-456 •2025-08-18T00:31:03.131Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:31:03.131Z INFO syncers/migration_to_syncer.go:163 migration Cleaning started: migrationId=test-migration-456, clusters=[] 2025-08-18T00:31:03.131Z INFO syncers/migration_to_syncer.go:633 auto approve user system:serviceaccount::migration-service-account not found in ClusterManager, no removal needed 2025-08-18T00:31:03.132Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:31:03.134Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:31:03.136Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-registration 2025-08-18T00:31:03.137Z INFO syncers/migration_to_syncer.go:171 migration Cleaning completed: migrationId=test-migration-456 •2025-08-18T00:31:03.243Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:31:03.243Z INFO syncers/migration_to_syncer.go:163 migration Rollbacking started: migrationId=test-migration-456, clusters=[] 2025-08-18T00:31:03.243Z INFO syncers/migration_to_syncer.go:641 performing rollback for stage: Initializing 2025-08-18T00:31:03.243Z INFO syncers/migration_to_syncer.go:633 auto approve user system:serviceaccount:open-cluster-management-agent-addon:migration-service-account not found in ClusterManager, no removal needed 2025-08-18T00:31:03.243Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:31:03.245Z INFO 
syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-registration 2025-08-18T00:31:03.248Z INFO syncers/migration_to_syncer.go:171 migration Rollbacking completed: migrationId=test-migration-456 •2025-08-18T00:31:03.263Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:31:03.263Z INFO syncers/migration_to_syncer.go:163 migration Rollbacking started: migrationId=test-migration-456, clusters=[test-cluster-rollback-deploying] 2025-08-18T00:31:03.263Z INFO syncers/migration_to_syncer.go:641 performing rollback for stage: Deploying 2025-08-18T00:31:03.263Z INFO syncers/migration_to_syncer.go:681 rollback deploying stage for clusters: [test-cluster-rollback-deploying] 2025-08-18T00:31:03.265Z INFO syncers/migration_to_syncer.go:730 successfully removed managed cluster: test-cluster-rollback-deploying 2025-08-18T00:31:03.268Z INFO syncers/migration_to_syncer.go:750 successfully removed klusterlet addon config: test-cluster-rollback-deploying 2025-08-18T00:31:03.268Z INFO syncers/migration_to_syncer.go:633 auto approve user system:serviceaccount:open-cluster-management-agent-addon:migration-service-account not found in ClusterManager, no removal needed 2025-08-18T00:31:03.268Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:31:03.270Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-registration 2025-08-18T00:31:03.272Z INFO syncers/migration_to_syncer.go:704 completed deploying stage rollback 2025-08-18T00:31:03.272Z INFO syncers/migration_to_syncer.go:171 migration Rollbacking completed: migrationId=test-migration-456 •2025-08-18T00:31:03.278Z INFO syncers/migration_to_syncer.go:69 received migration event from global-hub 2025-08-18T00:31:03.278Z INFO syncers/migration_to_syncer.go:163 migration Rollbacking started: migrationId=test-migration-456, 
clusters=[test-cluster-rollback-registering] 2025-08-18T00:31:03.278Z INFO syncers/migration_to_syncer.go:641 performing rollback for stage: Registering 2025-08-18T00:31:03.278Z INFO syncers/migration_to_syncer.go:710 rollback registering stage for clusters: [test-cluster-rollback-registering] 2025-08-18T00:31:03.278Z INFO syncers/migration_to_syncer.go:681 rollback deploying stage for clusters: [test-cluster-rollback-registering] 2025-08-18T00:31:03.280Z INFO syncers/migration_to_syncer.go:730 successfully removed managed cluster: test-cluster-rollback-registering 2025-08-18T00:31:03.280Z INFO syncers/migration_to_syncer.go:740 klusterlet addon config test-cluster-rollback-registering not found, already removed 2025-08-18T00:31:03.280Z INFO syncers/migration_to_syncer.go:633 auto approve user system:serviceaccount:open-cluster-management-agent-addon:migration-service-account not found in ClusterManager, no removal needed 2025-08-18T00:31:03.280Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-sar 2025-08-18T00:31:03.282Z INFO syncers/migration_to_syncer.go:831 deleting resource /global-hub-migration-migration-service-account-registration 2025-08-18T00:31:03.283Z INFO syncers/migration_to_syncer.go:704 completed deploying stage rollback 2025-08-18T00:31:03.283Z INFO syncers/migration_to_syncer.go:171 migration Rollbacking completed: migrationId=test-migration-456 •2025-08-18T00:31:03.290Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:31:03.290Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:31:03.290Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:31:03.290Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 00:31:03.290430 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" 
type="*v1.ManifestWork" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:03.290500 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Namespace" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:03.290555 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ClusterRoleBinding" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:03.290639 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ClusterRole" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:03.290690 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ClusterManager" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:03.290751 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.KlusterletConfig" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:03.290827 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.KlusterletAddonConfig" err="an error on the 
server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:03.290886 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManagedCluster" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:03.290946 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.MultiClusterHub" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:03.291019 24971 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" 2025-08-18T00:31:03.291Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:31:03.291Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:31:03.291Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager Summarizing 1 Failure: [FAIL] MigrationFromSyncer Error handling scenarios [It] should handle missing managed cluster during deployment /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/migration/migration_from_syncer_test.go:510 Ran 16 of 17 Specs in 13.540 seconds FAIL! 
-- 15 Passed | 1 Failed | 0 Pending | 1 Skipped
--- FAIL: TestMigration (13.54s)
FAIL
FAIL	github.com/stolostron/multicluster-global-hub/test/integration/agent/migration	13.570s
=== RUN   TestSyncers
Running Suite: Spec Syncers Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/spec
=====================================================================================================================
Random Seed: 1755477050
Will run 5 of 5 specs
2025-08-18T00:30:56.746Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver
2025-08-18T00:30:56.746Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Generic"}
2025-08-18T00:30:56.746Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "ManagedClustersLabels"}
2025-08-18T00:30:56.746Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "MigrationSourceHubCluster"}
2025-08-18T00:30:56.746Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "MigrationTargetHubCluster"}
2025-08-18T00:30:56.746Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Resync"}
2025-08-18T00:30:56.746Z INFO spec/spec.go:55 added the spec controllers to manager
2025-08-18T00:30:56.746Z INFO workers/worker_pool.go:62 starting worker pool {"size": 2}
2025-08-18T00:30:56.746Z INFO spec/dispatcher.go:51 started dispatching received bundles...
2025-08-18T00:30:56.748Z INFO spec worker 1 workers/worker.go:46 start running worker {"Id: ": 1} 2025-08-18T00:30:56.750Z INFO spec worker 2 workers/worker.go:46 start running worker {"Id: ": 2} map[test:add vendor:OpenShift] •2025-08-18T00:30:58.958Z INFO status-resyncer syncers/resync_syncer.go:43 resyncing event type {"eventType": "unknownMsg"} 2025-08-18T00:30:58.958Z INFO status-resyncer syncers/resync_syncer.go:48 event type unknownMsg is not registered for resync 2025-08-18T00:30:58.958Z INFO status-resyncer syncers/resync_syncer.go:43 resyncing event type {"eventType": "managedhub.info"} •create spec resource: { "kind": "Placement", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "metadata": { "name": "test-placements", "namespace": "default", "uid": "a13bba04-8dd4-49ec-8d41-33955e9a2f38", "resourceVersion": "351", "generation": 1, "creationTimestamp": "2025-08-18T00:30:59Z", "annotations": { "global-hub.open-cluster-management.io/origin-ownerreference-uid": "8f1e1b72-1f82-4dcf-bd2d-2554fc18e7da" }, "managedFields": [ { "manager": "spec.test", "operation": "Update", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "time": "2025-08-18T00:30:59Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:global-hub.open-cluster-management.io/origin-ownerreference-uid": {} } }, "f:spec": { ".": {}, "f:clusterSets": {}, "f:prioritizerPolicy": { ".": {}, "f:mode": {} } } } } ] }, "spec": { "clusterSets": [ "cluster1", "cluster2" ], "prioritizerPolicy": { "mode": "Additive" }, "spreadPolicy": {}, "decisionStrategy": { "groupStrategy": { "clustersPerDecisionGroup": 0 } } }, "status": { "numberOfSelectedClusters": 0, "decisionGroups": null, "conditions": null } } •create spec resource: { "kind": "PlacementBinding", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "test-placementbinding", "namespace": "default", "uid": "d09a6846-9cf4-4c8c-b669-6275bc4490dc", "resourceVersion": "352", 
"generation": 1, "creationTimestamp": "2025-08-18T00:30:59Z", "annotations": { "global-hub.open-cluster-management.io/origin-ownerreference-uid": "02084199-7a2d-405f-8f73-f8ca142a2cd9" }, "managedFields": [ { "manager": "spec.test", "operation": "Update", "apiVersion": "policy.open-cluster-management.io/v1", "time": "2025-08-18T00:30:59Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:global-hub.open-cluster-management.io/origin-ownerreference-uid": {} } }, "f:placementRef": { ".": {}, "f:apiGroup": {}, "f:kind": {}, "f:name": {} }, "f:subjects": {} } } ] }, "placementRef": { "apiGroup": "cluster.open-cluster-management.io", "kind": "Placement", "name": "placement-policy-limitrange" }, "subjects": [ { "apiGroup": "policy.open-cluster-management.io", "kind": "Policy", "name": "policy-limitrange" } ], "bindingOverrides": {}, "status": {} } ••2025-08-18T00:30:59.367Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:30:59.367Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:30:59.367Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:30:59.367Z INFO spec/dispatcher.go:56 stopped dispatching bundles 2025-08-18T00:30:59.367Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 00:30:59.367382 24977 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:30:59.367454 24977 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.PlacementBinding" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" 
I0818 00:30:59.367509 24977 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta1.Placement" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:30:59.367568 24977 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManagedCluster" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
2025-08-18T00:30:59.367Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:30:59.367Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:30:59.367Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager
Ran 5 of 5 Specs in 9.584 seconds
SUCCESS! -- 5 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestSyncers (9.58s)
PASS
ok  	github.com/stolostron/multicluster-global-hub/test/integration/agent/spec	9.612s
=== RUN   TestControllers
Running Suite: Status Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/agent/status
========================================================================================================================================
Random Seed: 1755477050
Will run 26 of 26 specs
2025-08-18T00:30:56.353Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver
2025-08-18T00:30:56.353Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver
2025-08-18T00:30:56.353Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver
2025-08-18T00:30:56.353Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver
2025-08-18T00:30:56.353Z INFO consumer/generic_consumer.go:89 transport consumer with go chan
receiver 2025-08-18T00:30:56.353Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:30:56.353Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver 2025-08-18T00:30:56.353Z INFO controller/controller.go:183 Starting Controller {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:30:56.353Z INFO controller/controller.go:217 Starting workers {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "worker count": 1} 2025-08-18T00:30:56.353Z INFO controller/controller.go:132 Starting EventSource {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "source": "kind source: *v1.Policy"} 2025-08-18T00:30:56.353Z INFO generic/periodic_syncer.go:69 Registered emitter for event type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec 2025-08-18T00:30:56.354Z INFO controller/controller.go:183 Starting Controller {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"} 2025-08-18T00:30:56.354Z INFO controller/controller.go:217 Starting workers {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement", "worker count": 1} 2025-08-18T00:30:56.354Z INFO controller/controller.go:175 Starting EventSource {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "source": "kind source: *v1.Policy"} 2025-08-18T00:30:56.354Z INFO controller/controller.go:183 Starting Controller {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:30:56.354Z INFO controller/controller.go:132 Starting EventSource {"controller": "placement", "controllerGroup": 
"cluster.open-cluster-management.io", "controllerKind": "Placement", "source": "kind source: *v1beta1.Placement"} 2025-08-18T00:30:56.354Z INFO controller/controller.go:183 Starting Controller {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:30:56.354Z INFO controller/controller.go:217 Starting workers {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap", "worker count": 1} 2025-08-18T00:30:56.354Z INFO controller/controller.go:132 Starting EventSource {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:30:56.354Z INFO controller/controller.go:175 Starting EventSource {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision", "source": "kind source: *v1beta1.PlacementDecision"} 2025-08-18T00:30:56.354Z INFO controller/controller.go:183 Starting Controller {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision"} 2025-08-18T00:30:56.354Z INFO status.hub_cluster_heartbeat generic/multi_object_syncer.go:78 sync interval has been reset to 2s 2025-08-18T00:30:56.455Z INFO controller/controller.go:217 Starting workers {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "worker count": 1} 2025-08-18T00:30:56.456Z INFO controller/controller.go:217 Starting workers {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision", "worker count": 1} 2025-08-18T00:30:56.459Z INFO controller/controller.go:183 Starting Controller {"controller": "route", "controllerGroup": "route.openshift.io", "controllerKind": "Route"} 2025-08-18T00:30:56.459Z INFO controller/controller.go:217 Starting workers {"controller": "route", "controllerGroup": 
"route.openshift.io", "controllerKind": "Route", "worker count": 1} 2025-08-18T00:30:56.459Z INFO status.hub_cluster_info generic/multi_object_syncer.go:78 sync interval has been reset to 2s 2025-08-18T00:30:56.459Z INFO controller/controller.go:175 Starting EventSource {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion", "source": "kind source: *v1.ClusterVersion"} 2025-08-18T00:30:56.459Z INFO controller/controller.go:183 Starting Controller {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion"} 2025-08-18T00:30:56.459Z INFO controller/controller.go:132 Starting EventSource {"controller": "route", "controllerGroup": "route.openshift.io", "controllerKind": "Route", "source": "kind source: *v1.Route"} 2025-08-18T00:30:56.459Z INFO generic/periodic_syncer.go:69 Registered emitter for event type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster 2025-08-18T00:30:56.459Z INFO controller/controller.go:183 Starting Controller {"controller": "subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport"} 2025-08-18T00:30:56.459Z INFO controller/controller.go:217 Starting workers {"controller": "subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport", "worker count": 1} 2025-08-18T00:30:56.459Z INFO controller/controller.go:175 Starting EventSource {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster", "source": "kind source: *v1.ManagedCluster"} 2025-08-18T00:30:56.459Z INFO controller/controller.go:183 Starting Controller {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster"} 2025-08-18T00:30:56.459Z INFO controller/controller.go:132 Starting EventSource {"controller": 
"subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport", "source": "kind source: *v1alpha1.SubscriptionReport"} 2025-08-18T00:30:56.460Z INFO controller/controller.go:175 Starting EventSource {"controller": "event", "controllerGroup": "", "controllerKind": "Event", "source": "kind source: *v1.Event"} 2025-08-18T00:30:56.460Z INFO controller/controller.go:183 Starting Controller {"controller": "event", "controllerGroup": "", "controllerKind": "Event"} 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 resync.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 resync.managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:30:56.462Z INFO 
configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:30:56.462Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value 2025-08-18T00:30:56.562Z INFO controller/controller.go:217 Starting workers {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion", "worker count": 1} 2025-08-18T00:30:56.564Z INFO controller/controller.go:217 Starting workers {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster", "worker count": 1} 2025-08-18T00:30:56.565Z INFO controller/controller.go:217 Starting workers {"controller": "event", "controllerGroup": "", "controllerKind": "Event", "worker count": 1} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.subscription.report source: hub1 id: 7161ffd1-a224-4a23-b155-696ecc86ee56 time: 2025-08-18T00:31:01.460739396Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "kind": "SubscriptionReport", "apiVersion": "apps.open-cluster-management.io/v1alpha1", "metadata": { "name": "test-subscriptionreport-1", "namespace": "default", "uid": "a369a5c8-2ae5-49b2-8f90-2dce83a18f3d", "resourceVersion": "351", "generation": 1, "creationTimestamp": "2025-08-18T00:30:56Z" }, "reportType": "Application", "summary": { "deployed": "1", "inProgress": "0", "failed": "0", "propagationFailed": "0", "clusters": "1" }, "results": [ { "source": "hub1-mc1", "timestamp": { "seconds": 0, "nanos": 0 }, "result": "deployed" } ], "resources": [ { "kind": "Deployment", "namespace": "default", "name": "nginx-sample", "apiVersion": "apps/v1" } ] } ] •2025-08-18T00:31:01.465Z INFO configmap/config_controller.go:105 setting resync.managedcluster interval to 30m0s 2025-08-18T00:31:01.465Z INFO configmap/config_controller.go:96 managedcluster sync interval not defined in 
configmap, using default value
2025-08-18T00:31:01.465Z INFO configmap/config_controller.go:105 setting resync.policy.localspec interval to 45m0s
2025-08-18T00:31:01.465Z INFO configmap/config_controller.go:105 setting policy.localspec interval to 3s
2025-08-18T00:31:01.465Z INFO configmap/config_controller.go:105 setting resync.managedhub.info interval to 2h0m0s
2025-08-18T00:31:01.465Z INFO configmap/config_controller.go:105 setting managedhub.info interval to 2s
2025-08-18T00:31:01.466Z INFO configmap/config_controller.go:105 setting resync.managedhub.heartbeat interval to 20m0s
2025-08-18T00:31:01.466Z INFO configmap/config_controller.go:105 setting managedhub.heartbeat interval to 2s
2025-08-18T00:31:01.466Z INFO configmap/config_controller.go:105 setting resync.event.managedcluster interval to 25m0s
2025-08-18T00:31:01.466Z INFO configmap/config_controller.go:105 setting event.managedcluster interval to 3s
2025-08-18T00:31:01.466Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value
2025-08-18T00:31:01.466Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value
•2025-08-18T00:31:01.469Z ERROR configmap/config_controller.go:102 failed to parse resync.managedcluster sync interval: time: invalid duration "also-invalid"
github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).setSyncInterval
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:102
github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:65
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:01.469Z ERROR configmap/config_controller.go:102 failed to parse managedcluster sync interval: time: invalid duration "invalid-duration"
github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).setSyncInterval
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:102
github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:66
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:96 resync.policy.localspec
sync interval not defined in configmap, using default value 2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:96 resync.managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:105 setting managedhub.info interval to 3s 2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:31:01.469Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value 2025-08-18T00:31:02.460Z INFO status.hub_cluster_info generic/multi_object_syncer.go:92 sync interval has been reset to 3s •2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 resync.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using 
default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 resync.managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:03.474Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:31:03.474Z INFO logger/level.go:37 set the logLevel: debug 2025-08-18T00:31:03.474Z DEBUG configmap/config_controller.go:89 Reconciliation complete. 
{"Request.Namespace": "multicluster-global-hub-agent", "Request.Name": "multicluster-global-hub-agent-config"} 2025-08-18T00:31:04.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} •2025-08-18T00:31:06.354Z INFO status.placement_decision generic/multi_event_syncer.go:147 sync interval has been reset to 3s 2025-08-18T00:31:06.354Z INFO status.policy generic/multi_event_syncer.go:147 sync interval has been reset to 3s 2025-08-18T00:31:06.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:06.354Z INFO status.placement generic/multi_event_syncer.go:147 sync interval has been reset to 3s 2025-08-18T00:31:06.460Z INFO status.subscription_report generic/multi_event_syncer.go:147 sync interval has been reset to 3s 2025-08-18T00:31:06.460Z INFO status.event generic/multi_event_syncer.go:147 sync interval has been reset to 3s 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:96 resync.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:96 managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:96 resync.managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:105 setting managedhub.info interval to 1s 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:31:07.488Z INFO 
configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value 2025-08-18T00:31:07.488Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value 2025-08-18T00:31:07.488Z DEBUG configmap/config_controller.go:89 Reconciliation complete. {"Request.Namespace": "multicluster-global-hub-agent", "Request.Name": "multicluster-global-hub-agent-config"} •2025-08-18T00:31:07.493Z INFO configmap/config_controller.go:96 resync.managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:07.493Z INFO configmap/config_controller.go:96 managedcluster sync interval not defined in configmap, using default value 2025-08-18T00:31:07.493Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:31:07.493Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value 2025-08-18T00:31:07.493Z INFO configmap/config_controller.go:96 resync.managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:31:07.493Z INFO configmap/config_controller.go:96 managedhub.info sync interval not defined in configmap, using default value 2025-08-18T00:31:07.493Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value 2025-08-18T00:31:07.493Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not 
defined in configmap, using default value
2025-08-18T00:31:07.493Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value
2025-08-18T00:31:07.494Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value
2025-08-18T00:31:07.494Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value
2025-08-18T00:31:07.494Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value
2025-08-18T00:31:07.494Z DEBUG configmap/config_controller.go:89 Reconciliation complete. {"Request.Namespace": "multicluster-global-hub-agent", "Request.Name": "multicluster-global-hub-agent-config"}
2025-08-18T00:31:08.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"}
2025-08-18T00:31:08.461Z INFO status.hub_cluster_info generic/multi_object_syncer.go:92 sync interval has been reset to 1s
•2025-08-18T00:31:09.499Z ERROR configmap/config_controller.go:102 failed to parse resync.managedcluster sync interval: time: invalid duration "not-a-duration"
github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).setSyncInterval
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:102
github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap.(*hubOfHubsConfigController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/agent/pkg/status/syncers/configmap/config_controller.go:65
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:105 setting managedcluster interval to 4s
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:96 resync.policy.localspec sync interval not defined in configmap, using default value
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:96 policy.localspec sync interval not defined in configmap, using default value
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:105 setting resync.managedhub.info interval to 35m0s
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:105 setting managedhub.info interval to 3s
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:96 resync.managedhub.heartbeat sync interval not defined in configmap, using default value
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:96 managedhub.heartbeat sync interval not defined in configmap, using default value
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:96 resync.event.managedcluster sync interval not defined in configmap, using default value
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:96 event.managedcluster sync interval not defined in configmap, using default value
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:112 aggregationLevel not defined in agentConfig, using default value
2025-08-18T00:31:09.499Z INFO configmap/config_controller.go:112 enableLocalPolicies not defined in agentConfig, using default value
2025-08-18T00:31:09.499Z DEBUG configmap/config_controller.go:89
Reconciliation complete. {"Request.Namespace": "multicluster-global-hub-agent", "Request.Name": "multicluster-global-hub-agent-config"} 2025-08-18T00:31:10.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:10.462Z INFO status.hub_cluster_info generic/multi_object_syncer.go:92 sync interval has been reset to 3s •2025-08-18T00:31:12.364Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:12.461Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "event.localrootpolicy"} >>>>>>>>>>>>>>>>>>> root policy event1 Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.localrootpolicy source: hub1 id: 84533d48-9eb4-4f6b-8036-f4a68f4db3c4 time: 2025-08-18T00:31:12.461071201Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "eventName": "event-local-policy.123r543243242", "eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster1", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:31:11Z", "policyId": "53865009-2bb3-4e7d-a819-6b9bdad76237", "compliance": "Unknown" } ] •2025-08-18T00:31:13.361Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: datacontenttype: application/json Extensions, extversion: 3.1 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "event-local-policy", "namespace": "default", "uid": "53865009-2bb3-4e7d-a819-6b9bdad76237", "resourceVersion": "361", "generation": 1, "creationTimestamp": "2025-08-18T00:31:11Z" }, "spec": { "disabled": true, "policy-templates": [] 
}, "status": {} } ] } 2025-08-18T00:31:13.361Z DEBUG emitters/object_emitter.go:290 sending {"type": "policy.localspec", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:31:13.361Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.localspec"} 2025-08-18T00:31:14.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} >>>>>>> not get the new event: policy1.newer.123r543245555 [ { "eventName": "event-local-policy.123r543243242", "eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster1", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:31:11Z", "policyId": "53865009-2bb3-4e7d-a819-6b9bdad76237", "compliance": "Unknown" } ] 2025-08-18T00:31:16.413Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} >>>>>>> not get the new event: policy1.newer.123r543245555 [ { "eventName": "event-local-policy.123r543243242", "eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster1", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:31:11Z", "policyId": "53865009-2bb3-4e7d-a819-6b9bdad76237", "compliance": "Unknown" } ] >>>>>>> not get the new event: policy1.newer.123r543245555 [ { "eventName": "event-local-policy.123r543243242", "eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster1", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:31:11Z", "policyId": "53865009-2bb3-4e7d-a819-6b9bdad76237", "compliance": "Unknown" } ] 2025-08-18T00:31:18.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:18.462Z 
DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "event.localrootpolicy"} >>>>>>>>>>>>>>>>>>> root policy event2 Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.localrootpolicy source: hub1 id: df39816d-0634-4bdd-ab51-8b968f52beb6 time: 2025-08-18T00:31:18.462034521Z datacontenttype: application/json Extensions, extversion: 1.2 Data, [ { "eventName": "policy1.newer.123r543245555", "eventNamespace": "default", "message": "Policy default/policy1 was propagated to cluster3", "reason": "PolicyPropagation", "source": { "component": "policy-propagator" }, "createdAt": "2025-08-18T00:31:15Z", "policyId": "53865009-2bb3-4e7d-a819-6b9bdad76237", "compliance": "Unknown" } ] •SContext Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: e4e7b98d-50ed-4b44-941e-6e26177b7fc4 time: 2025-08-18T00:31:13.361888077Z datacontenttype: application/json Extensions, extversion: 3.1 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "event-local-policy", "namespace": "default", "uid": "53865009-2bb3-4e7d-a819-6b9bdad76237", "resourceVersion": "361", "generation": 1, "creationTimestamp": "2025-08-18T00:31:11Z" }, "spec": { "disabled": true, "policy-templates": [] }, "status": {} } ] } 2025-08-18T00:31:20.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:21.356Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.completecompliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.completecompliance source: hub1 id: c1870a25-8944-43a0-85d9-4c30d7bbdb48 time: 2025-08-18T00:31:21.35685504Z datacontenttype: application/json Extensions, 
extdependencyversion: 1.1 extversion: 0.1 Data, [ { "policyId": "test-globalpolicy-uid", "nonCompliantClusters": [ "hub1-mc2", "hub1-mc3" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] 2025-08-18T00:31:21.357Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.compliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance source: hub1 id: 7280d6fa-c897-48ea-ae5c-d602160b860b time: 2025-08-18T00:31:21.356806389Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "policyId": "test-globalpolicy-uid", "compliantClusters": [ "hub1-mc1" ], "nonCompliantClusters": [ "hub1-mc2", "hub1-mc3" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:31:22.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:24.358Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:24.358Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.completecompliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.completecompliance source: hub1 id: ee62c2dd-ab4e-4b42-8045-3f1e6a6948b5 time: 2025-08-18T00:31:24.358883064Z datacontenttype: application/json Extensions, extdependencyversion: 1.1 extversion: 1.2 Data, [ { "policyId": "test-globalpolicy-uid", "nonCompliantClusters": [ "hub1-mc3" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:31:26.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:27.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": 
"policy.compliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance source: hub1 id: 8805911e-05d7-4a14-8e30-eb346758e990 time: 2025-08-18T00:31:27.354738433Z datacontenttype: application/json Extensions, extversion: 1.2 Data, [ { "policyId": "test-globalpolicy-uid", "compliantClusters": [ "hub1-mc1" ], "nonCompliantClusters": [ "hub1-mc3" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:31:28.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:30.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.compliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance source: hub1 id: fbc74379-32a9-4566-99d1-ac899c23276c time: 2025-08-18T00:31:30.354759396Z datacontenttype: application/json Extensions, extversion: 2.3 Data, [ { "policyId": "test-globalpolicy-uid", "compliantClusters": [], "nonCompliantClusters": [], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:31:30.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:30.461Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "event.clustergroupupgrade"} >>>>>>>>>>>>>>>>>>> cgu event Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.clustergroupupgrade source: hub1 id: 892c688a-187c-46e0-ab44-36c8b50e5c33 time: 2025-08-18T00:31:30.461489131Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "eventNamespace": "cgu-ns1", "eventName": "cgu-ns1.event.17cd34e8c8b27fdd", "eventAnnotations": { "cgu.openshift.io/event-type": "global", 
"cgu.openshift.io/total-clusters-count": "2" }, "cguName": "test-cgu1", "leafHubName": "hub1", "message": "ClusterGroupUpgrade test-cgu1 succeeded remediating policies", "reason": "CguSuccess", "reportingController": "cgu-controller", "reportingInstance": "cgu-controller-6794cf54d9-j7lgm", "type": "Normal", "createdAt": "2025-08-18T00:31:30Z" } ] •Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.heartbeat source: hub1 id: 8fb43f87-1f3c-4763-aa53-0101cecc18ba time: 2025-08-18T00:30:56.354394158Z datacontenttype: application/json Extensions, extversion: 0.0 Data, [] •2025-08-18T00:31:30.573Z DEBUG status.&TypeMeta{Kind:,APIVersion:,} generic/multi_object_syncer.go:187 Reconciliation complete. {"Namespace": "", "Name": "version"} 2025-08-18T00:31:30.574Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.signatureStores" 2025-08-18T00:31:31.463Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.info"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.info source: hub1 id: fe4f29a8-415a-4af5-862b-3a5781b63c4f time: 2025-08-18T00:31:31.463693874Z datacontenttype: application/json Extensions, extversion: 0.1 Data, { "consoleURL": "", "grafanaURL": "", "mchVersion": "", "clusterId": "00000000-0000-0000-0000-000000000001" } 2025-08-18T00:31:31.472Z DEBUG status.&TypeMeta{Kind:,APIVersion:,} generic/multi_object_syncer.go:187 Reconciliation complete. {"Namespace": "openshift-console", "Name": "console"} 2025-08-18T00:31:31.476Z DEBUG status.&TypeMeta{Kind:,APIVersion:,} generic/multi_object_syncer.go:187 Reconciliation complete. 
{"Namespace": "open-cluster-management-observability", "Name": "grafana"} 2025-08-18T00:31:32.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:34.356Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:34.463Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.info"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.info source: hub1 id: ee2abb18-dbba-428a-8868-30eec4f0e5c9 time: 2025-08-18T00:31:34.462934731Z datacontenttype: application/json Extensions, extversion: 1.3 Data, { "consoleURL": "https://console-openshift-console.apps.test-cluster", "grafanaURL": "https://grafana-open-cluster-management-observability.apps.test-cluster", "mchVersion": "", "clusterId": "00000000-0000-0000-0000-000000000001" } •2025-08-18T00:31:36.354Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: datacontenttype: application/json Extensions, extversion: 9.2 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "root-policy-test123", "namespace": "default", "uid": "9ce2e80c-a8ad-4506-9b3b-a07daa1245b0", "resourceVersion": "384", "generation": 1, "creationTimestamp": "2025-08-18T00:31:34Z" }, "spec": { "disabled": true, "policy-templates": [] }, "status": {} } ] } 2025-08-18T00:31:36.354Z DEBUG emitters/object_emitter.go:290 sending {"type": "policy.localspec", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:31:36.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.localspec"} 
============================ create policy -> policy spec event: disabled Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: 5e110148-d388-464a-a34e-02c546281a3f time: 2025-08-18T00:31:36.354241125Z datacontenttype: application/json Extensions, extversion: 9.2 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "root-policy-test123", "namespace": "default", "uid": "9ce2e80c-a8ad-4506-9b3b-a07daa1245b0", "resourceVersion": "384", "generation": 1, "creationTimestamp": "2025-08-18T00:31:34Z" }, "spec": { "disabled": true, "policy-templates": [] }, "status": {} } ] } 2025-08-18T00:31:36.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:38.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:40.353Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: datacontenttype: application/json Extensions, extversion: 10.3 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "root-policy-test123", "namespace": "default", "uid": "9ce2e80c-a8ad-4506-9b3b-a07daa1245b0", "resourceVersion": "385", "generation": 2, "creationTimestamp": "2025-08-18T00:31:34Z" }, "spec": { "disabled": false, "policy-templates": [] }, "status": {} } ] } 2025-08-18T00:31:40.353Z DEBUG emitters/object_emitter.go:290 sending {"type": "policy.localspec", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:31:40.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.localspec"} 
============================ update policy -> policy spec event: enabled Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec source: hub1 id: 737552f4-ca3e-42b3-9a92-8908a27ec00b time: 2025-08-18T00:31:40.353948457Z datacontenttype: application/json Extensions, extversion: 10.3 Data (binary), { "update": [ { "kind": "Policy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "root-policy-test123", "namespace": "default", "uid": "9ce2e80c-a8ad-4506-9b3b-a07daa1245b0", "resourceVersion": "385", "generation": 2, "creationTimestamp": "2025-08-18T00:31:34Z" }, "spec": { "disabled": false, "policy-templates": [] }, "status": {} } ] } •2025-08-18T00:31:40.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:42.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:42.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.localcompliance"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance source: hub1 id: 6093b922-034e-49d3-9115-a23bc6eb7bcb time: 2025-08-18T00:31:42.355480661Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "policyId": "9ce2e80c-a8ad-4506-9b3b-a07daa1245b0", "compliantClusters": [ "policy-cluster1" ], "nonCompliantClusters": [], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:31:44.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:45.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "policy.localcompletecompliance"} Context Attributes, specversion: 1.0 type: 
io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompletecompliance source: hub1 id: 43f844ab-6846-4ac1-9e3b-e1911313e6a2 time: 2025-08-18T00:31:45.354804563Z datacontenttype: application/json Extensions, extdependencyversion: 1.1 extversion: 0.1 Data, [ { "policyId": "9ce2e80c-a8ad-4506-9b3b-a07daa1245b0", "nonCompliantClusters": [ "policy-cluster1" ], "unknownComplianceClusters": [], "pendingComplianceClusters": [] } ] •2025-08-18T00:31:46.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:47.354Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: datacontenttype: application/json Extensions, extversion: 10.1 Data (binary), { "update": [ { "kind": "ManagedCluster", "apiVersion": "cluster.open-cluster-management.io/v1", "metadata": { "name": "policy-cluster1", "uid": "3afc63d4-45fa-441c-9cf8-0af70216093a", "resourceVersion": "392", "generation": 1, "creationTimestamp": "2025-08-18T00:31:45Z", "annotations": { "global-hub.open-cluster-management.io/managed-by": "hub1" } }, "spec": { "hubAcceptsClient": false, "leaseDurationSeconds": 60 }, "status": { "conditions": null, "version": {}, "clusterClaims": [ { "name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680336" } ] } } ] } 2025-08-18T00:31:47.354Z DEBUG emitters/object_emitter.go:290 sending {"type": "managedcluster", "create": 0, "update": 1, "delete": 0, "resync": 0, "resync_metadata": 0} 2025-08-18T00:31:47.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedcluster"} 2025-08-18T00:31:48.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:48.355Z DEBUG consumer/generic_consumer.go:159 received message 
{"event.Source": "hub1", "event.Type": "event.localreplicatedpolicy"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.localreplicatedpolicy source: hub1 id: 350a027b-527e-4758-83bc-80668a77b56a time: 2025-08-18T00:31:48.355018152Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "eventName": "default.root-policy-test123.17b0db2427432200", "eventNamespace": "policy-cluster1", "message": "NonCompliant; violation - limitranges [container-mem-limit-range] not found in namespace\n\t\t\t\t\t\t\tdefault", "reason": "PolicyStatusSync", "count": 1, "source": { "component": "policy-status-history-sync" }, "createdAt": "2025-08-18T00:31:45Z", "policyId": "9ce2e80c-a8ad-4506-9b3b-a07daa1245b0", "clusterId": "3f406177-34b2-4852-88dd-ff2809680336", "clusterName": "policy-cluster1", "compliance": "NonCompliant" } ] •2025-08-18T00:31:48.359Z INFO KubeAPIWarningLogger log/warning_handler.go:65 metadata.finalizers: "cleaning-up": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers init cluster Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: 756ef6e7-d47f-4372-9e2f-723a70b620da time: 2025-08-18T00:31:47.35475411Z datacontenttype: application/json Extensions, extversion: 10.1 Data (binary), { "update": [ { "kind": "ManagedCluster", "apiVersion": "cluster.open-cluster-management.io/v1", "metadata": { "name": "policy-cluster1", "uid": "3afc63d4-45fa-441c-9cf8-0af70216093a", "resourceVersion": "392", "generation": 1, "creationTimestamp": "2025-08-18T00:31:45Z", "annotations": { "global-hub.open-cluster-management.io/managed-by": "hub1" } }, "spec": { "hubAcceptsClient": false, "leaseDurationSeconds": 60 }, "status": { "conditions": null, "version": {}, "clusterClaims": [ { "name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680336" } ] } } ] } 
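Every dump in this log shares the same CloudEvents 1.0 envelope: the spec'd context attributes (specversion, type, source, id, time, datacontenttype) plus a versioned extension attribute (extversion). The project itself builds these with github.com/cloudevents/sdk-go/v2; the helper below is a purely illustrative, stdlib-only sketch of that envelope shape, with the function name and structure invented for this example:

```python
import json
import uuid
from datetime import datetime, timezone

# Illustrative only -- the agent/manager use github.com/cloudevents/sdk-go/v2.
# This mirrors the context attributes printed in the event dumps above.
def make_event(event_type: str, source: str, ext_version: str, data) -> dict:
    return {
        "specversion": "1.0",
        "type": "io.open-cluster-management.operator.multiclusterglobalhubs." + event_type,
        "source": source,                    # leaf hub name, e.g. "hub1"
        "id": str(uuid.uuid4()),
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "extversion": ext_version,           # extension attribute, as in the dumps
        "data": data,
    }

event = make_event("policy.localspec", "hub1", "10.3",
                   {"update": [{"kind": "Policy",
                                "metadata": {"name": "root-policy-test123"}}]})
print(json.dumps(event, indent=2))
```

The `data` field is the only part that varies by bundle type (spec updates, compliance lists, deletes), which is why the consumer can route purely on `event.Type`.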
•2025-08-18T00:31:50.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:52.354Z DEBUG emitters/object_emitter.go:281 sending cloudevents: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: datacontenttype: application/json Extensions, extversion: 11.4 Data (binary), { "delete": [ { "id": "2f9c3a64-8d57-4a43-9a70-2f8d4ef67259", "name": "test-mc-1" } ] } 2025-08-18T00:31:52.354Z DEBUG emitters/object_emitter.go:290 sending {"type": "managedcluster", "create": 0, "update": 0, "delete": 1, "resync": 0, "resync_metadata": 0} 2025-08-18T00:31:52.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedcluster"} empty cluster: Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster source: hub1 id: 40b48431-15f5-418f-9eac-1f617b1993de time: 2025-08-18T00:31:52.354197795Z datacontenttype: application/json Extensions, extversion: 11.4 Data (binary), { "delete": [ { "id": "2f9c3a64-8d57-4a43-9a70-2f8d4ef67259", "name": "test-mc-1" } ] } •2025-08-18T00:31:52.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:52.358Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy" 2025-08-18T00:31:52.359Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy" 2025-08-18T00:31:52.359Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups" 2025-08-18T00:31:52.361Z DEBUG generic/generic_handler.go:42 update bundle by object: &{{Placement cluster.open-cluster-management.io/v1beta1} {test-globalplacement-1 default 6fb4a138-2177-417e-ba6d-d7ffd43f2318 398 1 2025-08-18 00:31:52 +0000 UTC map[] 
map[global-hub.open-cluster-management.io/origin-ownerreference-uid:test-globalplacement-uid] [] [] []} {[] [] {Additive []} {[]} [] {{[] {0 0 }}}} {0 [] []}} 2025-08-18T00:31:52.364Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy" 2025-08-18T00:31:52.364Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy" 2025-08-18T00:31:52.364Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups" 2025-08-18T00:31:52.365Z DEBUG generic/generic_handler.go:42 update bundle by object: &{{Placement cluster.open-cluster-management.io/v1beta1} {test-globalplacement-1 default 6fb4a138-2177-417e-ba6d-d7ffd43f2318 399 1 2025-08-18 00:31:52 +0000 UTC map[] map[global-hub.open-cluster-management.io/origin-ownerreference-uid:test-globalplacement-uid] [] [] []} {[] [] {Additive []} {[]} [] {{[] {0 0 }}}} {0 [] []}} 2025-08-18T00:31:54.355Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "placement.spec"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.placement.spec source: hub1 id: 3daf7197-7394-42bd-a39f-b4838eb81650 time: 2025-08-18T00:31:54.355409938Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "kind": "Placement", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "metadata": { "name": "test-globalplacement-1", "namespace": "default", "uid": "6fb4a138-2177-417e-ba6d-d7ffd43f2318", "resourceVersion": "399", "generation": 1, "creationTimestamp": "2025-08-18T00:31:52Z", "annotations": { "global-hub.open-cluster-management.io/origin-ownerreference-uid": "test-globalplacement-uid" }, "finalizers": [ "global-hub.open-cluster-management.io/resource-cleanup" ], "managedFields": [ { "manager": "status.test", "operation": "Update", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "time": "2025-08-18T00:31:52Z", "fieldsType": "FieldsV1", 
"fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:global-hub.open-cluster-management.io/origin-ownerreference-uid": {} }, "f:finalizers": { ".": {}, "v:\"global-hub.open-cluster-management.io/resource-cleanup\"": {} } }, "f:spec": { ".": {}, "f:prioritizerPolicy": { ".": {}, "f:mode": {} } } } } ] }, "spec": { "prioritizerPolicy": { "mode": "Additive" }, "spreadPolicy": {}, "decisionStrategy": { "groupStrategy": { "clustersPerDecisionGroup": 0 } } }, "status": { "numberOfSelectedClusters": 0, "decisionGroups": null, "conditions": null } } ] •2025-08-18T00:31:54.356Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:54.358Z DEBUG generic/generic_handler.go:42 update bundle by object: &{{PlacementDecision cluster.open-cluster-management.io/v1beta1} {test-placementdecision-1 default 007d7912-3652-4763-b5d5-b1d4473e7ce5 402 1 2025-08-18 00:31:54 +0000 UTC map[] map[global-hub.open-cluster-management.io/origin-ownerreference-uid:test-globalplacement-decision-uid] [] [] []} {[]}} 2025-08-18T00:31:54.362Z DEBUG generic/generic_handler.go:42 update bundle by object: &{{PlacementDecision cluster.open-cluster-management.io/v1beta1} {test-placementdecision-1 default 007d7912-3652-4763-b5d5-b1d4473e7ce5 403 1 2025-08-18 00:31:54 +0000 UTC map[] map[global-hub.open-cluster-management.io/origin-ownerreference-uid:test-globalplacement-decision-uid] [] [] []} {[]}} 2025-08-18T00:31:56.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "managedhub.heartbeat"} 2025-08-18T00:31:57.354Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "placementdecision"} Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.placementdecision source: hub1 id: b3c2852a-afb1-4328-b8e8-8876ffa4218a time: 2025-08-18T00:31:57.354892079Z datacontenttype: application/json 
Extensions, extversion: 0.1 Data, [ { "kind": "PlacementDecision", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "metadata": { "name": "test-placementdecision-1", "namespace": "default", "uid": "007d7912-3652-4763-b5d5-b1d4473e7ce5", "resourceVersion": "403", "generation": 1, "creationTimestamp": "2025-08-18T00:31:54Z", "annotations": { "global-hub.open-cluster-management.io/origin-ownerreference-uid": "test-globalplacement-decision-uid" }, "finalizers": [ "global-hub.open-cluster-management.io/resource-cleanup" ], "managedFields": [ { "manager": "status.test", "operation": "Update", "apiVersion": "cluster.open-cluster-management.io/v1beta1", "time": "2025-08-18T00:31:54Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:global-hub.open-cluster-management.io/origin-ownerreference-uid": {} }, "f:finalizers": { ".": {}, "v:\"global-hub.open-cluster-management.io/resource-cleanup\"": {} } } } } ] }, "status": { "decisions": null } } ] •2025-08-18T00:31:57.460Z DEBUG consumer/generic_consumer.go:159 received message {"event.Source": "hub1", "event.Type": "event.managedcluster"} >>>>>>>>>>>>>>>>>>> managed cluster event Context Attributes, specversion: 1.0 type: io.open-cluster-management.operator.multiclusterglobalhubs.event.managedcluster source: hub1 id: 042208a1-0508-4d38-bd9c-ce7b9465cb41 time: 2025-08-18T00:31:57.46078048Z datacontenttype: application/json Extensions, extversion: 0.1 Data, [ { "eventNamespace": "cluster2", "eventName": "cluster2.event.17cd34e8c8b27fdd", "clusterName": "cluster2", "clusterId": "4f406177-34b2-4852-88dd-ff2809680444", "leafHubName": "hub1", "message": "The managed cluster (cluster2) cannot connect to the hub cluster.", "reason": "AvailableUnknown", "reportingController": "registration-controller", "reportingInstance": "registration-controller-cluster-manager-registration-controller-6794cf54d9-j7lgm", "type": "Warning", "createdAt": "2025-08-18T00:31:57Z" } ] 
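The policy.localcompliance payloads earlier in this log group each policy's clusters into compliant / nonCompliant / unknownCompliance / pendingCompliance buckets keyed by policyId. A hypothetical sketch of that bucketing, with field names taken from the dumps but the grouping function itself invented for illustration (this is not the agent's actual code):

```python
from collections import defaultdict

# Bucket (policy_id, cluster, state) tuples into the per-policy shape seen in
# the policy.localcompliance event data above. Purely illustrative.
def bucket_compliance(rows):
    buckets = defaultdict(lambda: {
        "compliantClusters": [],
        "nonCompliantClusters": [],
        "unknownComplianceClusters": [],
        "pendingComplianceClusters": [],
    })
    field = {
        "Compliant": "compliantClusters",
        "NonCompliant": "nonCompliantClusters",
        "Pending": "pendingComplianceClusters",
        # anything else falls into the unknown bucket
    }
    for policy_id, cluster, state in rows:
        buckets[policy_id][field.get(state, "unknownComplianceClusters")].append(cluster)
    return [{"policyId": pid, **b} for pid, b in buckets.items()]

payload = bucket_compliance([
    ("9ce2e80c-a8ad-4506-9b3b-a07daa1245b0", "policy-cluster1", "Compliant"),
])
```

Under this shape, a cluster moving from Compliant to NonCompliant (as happens between the localcompliance and localcompletecompliance events in the log) simply changes which list carries its name.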
•2025-08-18T00:31:57.464Z INFO consumer/generic_consumer.go:179 receiver stopped
2025-08-18T00:31:57.464Z INFO consumer/generic_consumer.go:179 receiver stopped
context canceled, exiting...
2025-08-18T00:31:57.464Z INFO consumer/generic_consumer.go:179 receiver stopped
2025-08-18T00:31:57.464Z INFO consumer/generic_consumer.go:179 receiver stopped
2025-08-18T00:31:57.464Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
2025-08-18T00:31:57.464Z INFO consumer/generic_consumer.go:179 receiver stopped
2025-08-18T00:31:57.464Z INFO consumer/generic_consumer.go:179 receiver stopped
2025-08-18T00:31:57.464Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
2025-08-18T00:31:57.464Z INFO generic/periodic_syncer.go:155 Stopping periodic syncer...
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "route", "controllerGroup": "route.openshift.io", "controllerKind": "Route"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "event", "controllerGroup": "", "controllerKind": "Event"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "route", "controllerGroup": "route.openshift.io", "controllerKind": "Route"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "policy.localspec", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "configmap", "controllerGroup": "", "controllerKind": "ConfigMap"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "event", "controllerGroup": "", "controllerKind": "Event"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "managedcluster", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedCluster"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "subscriptionreport", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "SubscriptionReport"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "placementdecision", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "PlacementDecision"}
2025-08-18T00:31:57.464Z INFO controller/controller.go:239 All workers finished {"controller": "clusterversion", "controllerGroup": "config.openshift.io", "controllerKind": "ClusterVersion"}
2025-08-18T00:31:57.464Z INFO manager/internal.go:550 Stopping and waiting for caches
I0818 00:31:57.464742 24978 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ClusterVersion" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:57.464742 24978 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManagedCluster" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:57.464828 24978 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta1.Placement" err="an error on the server (\"unable to decode an event from the
watch stream: context canceled\") has prevented the request from succeeding"
2025-08-18T00:31:57.465Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:31:57.465Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:31:57.465Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager

Ran 25 of 26 Specs in 67.646 seconds
SUCCESS! -- 25 Passed | 0 Failed | 0 Pending | 1 Skipped
--- PASS: TestControllers (67.65s)
PASS
ok github.com/stolostron/multicluster-global-hub/test/integration/agent/status 67.671s

failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
=== RUN TestNonK8sAPI
Running Suite: NonK8s API Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/api
====================================================================================================================
Random Seed: 1755477050
Will run 6 of 6 specs

The files belonging to this database system
will be owned by user "1002610000". This user must also own the server process.

The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/tmp/embedded-postgres-go-1465/extracted/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    /tmp/tmp/embedded-postgres-go-1465/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-1465/extracted/data -l logfile start

waiting for server to start....2025-08-18 00:30:54.079 UTC [25403] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
2025-08-18 00:30:54.080 UTC [25403] LOG: listening on IPv6 address "::1", port 1465
2025-08-18 00:30:54.080 UTC [25403] LOG: listening on IPv4 address "127.0.0.1", port 1465
2025-08-18 00:30:54.080 UTC [25403] LOG: listening on Unix socket "/tmp/.s.PGSQL.1465"
2025-08-18 00:30:54.082 UTC [25406] LOG: database system was shut down at 2025-08-18 00:30:54 UTC
2025-08-18 00:30:54.085 UTC [25403] LOG: database system is ready to accept connections
done
server started
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
script 1.upgrade.sql executed successfully.
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
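The managedcluster list queries below paginate with a composite keyset — `(payload -> 'metadata' ->> 'name', cluster_id) > (lastName, lastUID)` ordered by the same tuple — rather than OFFSET, and fold the parsed labelSelector into extra JSONB predicates. A sketch of how such a query string could be assembled (function name and parameter handling are invented for illustration; real code should bind values as parameters rather than interpolate them):

```python
# Assemble a keyset-paginated list query shaped like the ones logged below.
# Illustrative only -- not the manager's actual query builder.
def cluster_list_query(last_name: str, last_uid: str,
                       selector_predicates=(), limit: int = 0) -> str:
    key = "(payload -> 'metadata' ->> 'name', cluster_id)"
    sql = (
        "SELECT payload FROM status.managed_clusters "
        "WHERE deleted_at is NULL "
        f"AND {key} > ('{last_name}', '{last_uid}')"
    )
    # e.g. JSONB @> containment / ? key-existence tests from the labelSelector
    for pred in selector_predicates:
        sql += " AND " + pred
    sql += f" ORDER BY {key}"
    if limit > 0:
        sql += f" LIMIT {limit}"
    return sql

q = cluster_list_query("", "00000000-0000-0000-0000-000000000000",
                       ["payload -> 'metadata' -> 'labels' @> '{\"cloud\": \"Other\"}'"],
                       limit=2)
```

Because the `continue` token seen in the GIN requests encodes only the last returned (name, UID) pair, each page restarts the scan at that keyset boundary instead of re-counting skipped rows.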
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. - using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) failed to get CustomResourceDefinition for managedclusters.cluster.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "managedclusters.cluster.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope[GIN-debug] GET /global-hub-api/v1/managedclusters --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis/managedclusters.ListManagedClusters.func1 (4 handlers) [GIN-debug] PATCH /global-hub-api/v1/managedcluster/:clusterID --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.PatchManagedCluster.func2 (4 handlers) [GIN-debug] GET /global-hub-api/v1/policies --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.ListPolicies.func3 (4 handlers) [GIN-debug] GET /global-hub-api/v1/policy/:policyID/status --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.GetPolicyStatus.func4 (4 handlers) [GIN-debug] GET /global-hub-api/v1/subscriptions --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.ListSubscriptions.func5 (4 handlers) [GIN-debug] GET /global-hub-api/v1/subscriptionreport/:subscriptionID --> github.com/stolostron/multicluster-global-hub/manager/pkg/restapis.SetupRouter.GetSubscriptionReport.func6 (4 handlers) got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters 
WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) [GIN] 2025/08/18 - 00:30:54 | 200 | 2.628472ms | | GET "/global-hub-api/v1/managedclusters" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) [GIN] 2025/08/18 - 00:30:54 | 200 | 1.263745ms | | GET "/global-hub-api/v1/managedclusters" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) [GIN] 2025/08/18 - 00:30:54 | 200 | 1.317356ms | | GET "/global-hub-api/v1/managedclusters?continue=eyJsYXN0TmFtZSI6IiIsImxhc3RVSUQiOiIwMDAwMDAwMC0wMDAwLTAwMDAtMDAwMC0wMDAwMDAwMDAwMDAifQ" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: AND payload -> 'metadata' -> 'labels' @> '{"cloud": "Other"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"vendor": "Openshift"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') AND payload -> 'metadata' -> 'labels' ? 
'vendor' limit: 2 last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') AND payload -> 'metadata' -> 'labels' @> '{"cloud": "Other"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"vendor": "Openshift"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') AND payload -> 'metadata' -> 'labels' ? 'vendor' ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) LIMIT 2 [GIN] 2025/08/18 - 00:30:54 | 200 | 1.069841ms | | GET "/global-hub-api/v1/managedclusters?limit=2&labelSelector=cloud%3DOther%2Cvendor%21%3DOpenshift%2C%21testnokey%2Cvendor" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) Returning as table... [GIN] 2025/08/18 - 00:30:54 | 200 | 1.167033ms | | GET "/global-hub-api/v1/managedclusters" MCL Table {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Age","type":"date","format":"","description":"Custom resource definition column (in JSONPath format): .metadata.creationTimestamp","priority":0}],"rows":[{"cells":["mc1",null],"object":{"apiVersion":"cluster.open-cluster-management.io/v1","kind":"ManagedCluster","metadata":{"annotations":{"global-hub.open-cluster-management.io/managed-by":"hub1","open-cluster-management/created-via":"other"},"creationTimestamp":null,"labels":{"cloud":"Other","vendor":"Other"},"name":"mc1","uid":"2aa5547c-c172-47ed-b70b-db468c84d327"},"spec":{"hubAcceptsClient":true,"leaseDurationSeconds":60},"status":{"conditions":null,"version":{}}}},{"cells":["mc2",null],"object":{"apiVersion":"cluster.open-cluster-management.io/v1","kind":"ManagedCluster","metadata":{"annotations":{"global-hub.open-cluster-management.io/managed-by":"hub1","open-cluster-management/created-via":"other"},"creationTimestamp":null,"labels":{"cloud":"Other","vendor":"Other"},"name":"mc2","uid":"18c9e13c-4488-4dcd-a5ac-1196093abbc0"},"spec":{"hubAcceptsClient":true,"leaseDurationSeconds":60},"status":{"conditions":null,"version":{}}}}]} got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned managed cluster name: , last returned managed cluster UID: 00000000-0000-0000-0000-000000000000 managedcluster list query: SELECT payload FROM status.managed_clusters WHERE deleted_at is NULL AND (payload -> 'metadata' ->> 'name', cluster_id) > ('', '00000000-0000-0000-0000-000000000000') ORDER BY (payload -> 'metadata' ->> 'name', cluster_id) •[GIN] 2025/08/18 - 00:31:02 | 200 | 8.003988659s | | GET "/global-hub-api/v1/managedclusters?watch" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] patch for cluster with ID: 2aa5547c-c172-47ed-b70b-db468c84d327 patch for managed cluster: mc1 -leaf hub: hub1 
labels to add: map[foo:bar] labels to remove: map[] [GIN] 2025/08/18 - 00:31:02 | 200 | 2.854197ms | | PATCH "/global-hub-api/v1/managedcluster/2aa5547c-c172-47ed-b70b-db468c84d327" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] patch for cluster with ID: 2aa5547c-c172-47ed-b70b-db468c84d327 patch for managed cluster: mc1 -leaf hub: hub1 labels to add: map[foo:test] labels to remove: map[] [GIN] 2025/08/18 - 00:31:02 | 200 | 1.537894ms | | PATCH "/global-hub-api/v1/managedcluster/2aa5547c-c172-47ed-b70b-db468c84d327" •got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? 
ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' [GIN] 2025/08/18 - 00:31:02 | 200 | 2.10777ms | | GET "/global-hub-api/v1/policies" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? 
ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' [GIN] 2025/08/18 - 00:31:02 | 200 | 16.052933ms | | GET "/global-hub-api/v1/policies" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: AND payload -> 'metadata' -> 'labels' @> '{"foo": "bar"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"env": "dev"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') AND payload -> 'metadata' -> 'labels' ? 'foo' limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') AND payload -> 'metadata' -> 'labels' @> '{"foo": "bar"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"env": "dev"}') AND NOT (payload -> 'metadata' -> 'labels' ? 
'testnokey') AND payload -> 'metadata' -> 'labels' ? 'foo' ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' [GIN] 2025/08/18 - 00:31:02 | 200 | 2.613425ms | | GET "/global-hub-api/v1/policies?labelSelector=foo%3Dbar%2Cenv%21%3Ddev%2C%21testnokey%2Cfoo" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: 
SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' Returning as table... [GIN] 2025/08/18 - 00:31:02 | 200 | 2.552074ms | | GET "/global-hub-api/v1/policies" Policy Table {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Age","type":"date","format":"","description":"Custom resource definition column (in JSONPath format): .metadata.creationTimestamp","priority":0}],"rows":[{"cells":["policy-config-audit",null],"object":{"apiVersion":"policy.open-cluster-management.io/v1","kind":"Policy","metadata":{"annotations":{"policy.open-cluster-management.io/categories":"AU Audit and Accountability","policy.open-cluster-management.io/controls":"AU-3 Content of Audit Records","policy.open-cluster-management.io/standards":"NIST SP 800-53"},"creationTimestamp":null,"labels":{"env":"production","foo":"bar"},"name":"policy-config-audit","namespace":"default"},"spec":{"disabled":false,"policy-templates":[{"objectDefinition":{"apiVersion":"policy.open-cluster-management.io/v1","kind":"ConfigurationPolicy","metadata":{"name":"policy-config-audit"},"spec":{"object-templates":[{"complianceType":"musthave","objectDefinition":{"apiVersion":"config.openshift.io/v1","kind":"APIServer","metadata":{"name":"cluster"},"spec":{"audit":{"customRules":[{"group":"system:authenticated:oauth","profile":"WriteRequestBodies"},{"group":"system:authenticated","profile":"AllRequestBodies"}]},"profile":"Default"}}}],"remediationAction":"inform","severity":"low"}}}],"remediationAction":"inform"},"status":{"compliant":"NonCompliant","placement":[{"placementBinding":"binding-config-audit","placementRule":"placement-config-audit"}],"status":[{"clustername":"mc1","clusternamespace":"mc1","compliant":"NonCompliant"},{"clustername":"mc2","clusternamespace":"mc2","compliant":"Compliant"}],"summary":{"complianceClusterNumber":1,"nonComplianceClusterNumber":1}}}}]} got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned policy name: , last returned policy] UID: last policy query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE ORDER BY 
(payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') DESC LIMIT 1 policy list query: SELECT id, payload FROM spec.policies WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' •got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] getting status for policy: 27125dca-f0b7-4850-ac48-8621bf134965 policy query with policy ID: SELECT payload FROM spec.policies WHERE deleted = FALSE AND id = ? policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? 
ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' [GIN] 2025/08/18 - 00:31:10 | 200 | 4.581542ms | | GET "/global-hub-api/v1/policy/27125dca-f0b7-4850-ac48-8621bf134965/status" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] getting status for policy: 27125dca-f0b7-4850-ac48-8621bf134965 policy query with policy ID: SELECT payload FROM spec.policies WHERE deleted = FALSE AND id = ? policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? 
ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' returning policy as table... [GIN] 2025/08/18 - 00:31:10 | 200 | 2.380868ms | | GET "/global-hub-api/v1/policy/27125dca-f0b7-4850-ac48-8621bf134965/status" Single Policy Table {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Age","type":"date","format":"","description":"Custom resource definition column (in JSONPath format): .metadata.creationTimestamp","priority":0}],"rows":[{"cells":["policy-config-audit",null],"object":{"apiVersion":"policy.open-cluster-management.io/v1","kind":"Policy","metadata":{"annotations":{"policy.open-cluster-management.io/categories":"AU Audit and Accountability","policy.open-cluster-management.io/controls":"AU-3 Content of Audit Records","policy.open-cluster-management.io/standards":"NIST SP 800-53"},"creationTimestamp":null,"labels":{"env":"production","foo":"bar"},"name":"policy-config-audit","namespace":"default"},"status":{"compliant":"NonCompliant","placement":[{"placementBinding":"binding-config-audit","placementRule":"placement-config-audit"}],"status":[{"clustername":"mc1","clusternamespace":"mc1","compliant":"NonCompliant"},{"clustername":"mc2","clusternamespace":"mc2","compliant":"Compliant"}],"summary":{"complianceClusterNumber":1,"nonComplianceClusterNumber":1}}}}]} got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] getting status for policy: 27125dca-f0b7-4850-ac48-8621bf134965 policy query with policy ID: SELECT payload FROM spec.policies WHERE deleted = FALSE AND id = ? policy compliance query with policy ID: SELECT cluster_name,leaf_hub_name,compliance FROM status.compliance WHERE policy_id = ? 
ORDER BY leaf_hub_name, cluster_name policy&placementbinding&placementrule mapping query: SELECT p.payload -> 'metadata' ->> 'name' AS policy, pb.payload -> 'metadata' ->> 'name' AS binding, pr.payload -> 'metadata' ->> 'name' AS placementrule FROM spec.policies p INNER JOIN spec.placementbindings pb ON p.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pb.payload -> 'subjects' @> json_build_array(json_build_object( 'name', p.payload -> 'metadata' ->> 'name', 'kind', p.payload ->> 'kind', 'apiGroup', split_part(p.payload ->> 'apiVersion', '/',1) ))::jsonb INNER JOIN spec.placementrules pr ON pr.payload -> 'metadata' ->> 'namespace' = pb.payload -> 'metadata' ->> 'namespace' AND pr.payload -> 'metadata' ->> 'name' = pb.payload -> 'placementRef' ->> 'name' AND pr.payload ->> 'kind' = pb.payload -> 'placementRef' ->> 'kind' AND split_part(pr.payload ->> 'apiVersion', '/', 1) = pb.payload -> 'placementRef' ->> 'apiGroup' [GIN] 2025/08/18 - 00:31:10 | 200 | 8.021210552s | | GET "/global-hub-api/v1/policies?watch" •got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') [GIN] 2025/08/18 - 00:31:18 | 200 | 8.005501844s | | GET "/global-hub-api/v1/policy/27125dca-f0b7-4850-ac48-8621bf134965/status?watch" [GIN] 2025/08/18 - 00:31:18 | 200 | 4.827138ms | | GET "/global-hub-api/v1/subscriptions" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload 
-> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') [GIN] 2025/08/18 - 00:31:18 | 200 | 1.595752ms | | GET "/global-hub-api/v1/subscriptions" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: AND payload -> 'metadata' -> 'labels' @> '{"app": "foo"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"env": "dev"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') limit: last returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') AND payload -> 'metadata' -> 'labels' @> '{"app": "foo"}' AND NOT (payload -> 'metadata' -> 'labels' @> '{"env": "dev"}') AND NOT (payload -> 'metadata' -> 'labels' ? 'testnokey') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') [GIN] 2025/08/18 - 00:31:18 | 200 | 1.320147ms | | GET "/global-hub-api/v1/subscriptions?labelSelector=app%3Dfoo%2Cenv%21%3Ddev%2C%21testnokey" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') Returning as table... [GIN] 2025/08/18 - 00:31:18 | 200 | 1.875808ms | | GET "/global-hub-api/v1/subscriptions" Subs Table {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. 
Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Age","type":"date","format":"","description":"Custom resource definition column (in JSONPath format): .metadata.creationTimestamp","priority":0}],"rows":[{"cells":["bar-appsub",null],"object":{"apiVersion":"apps.open-cluster-management.io/v1","kind":"Subscription","metadata":{"annotations":{"apps.open-cluster-management.io/git-branch":"main","apps.open-cluster-management.io/git-path":"bar","apps.open-cluster-management.io/reconcile-option":"merge"},"creationTimestamp":null,"labels":{"app":"bar","app.kubernetes.io/part-of":"bar","apps.open-cluster-management.io/reconcile-rate":"medium"},"name":"bar-appsub","namespace":"bar"},"spec":{"channel":"git-application-samples-ns/git-application-samples","placement":{"placementRef":{"kind":"PlacementRule","name":"bar-placement"}}},"status":{"ansiblejobs":{},"lastUpdateTime":null}}},{"cells":["foo-appsub",null],"object":{"apiVersion":"apps.open-cluster-management.io/v1","kind":"Subscription","metadata":{"annotations":{"apps.open-cluster-management.io/git-branch":"main","apps.open-cluster-management.io/git-path":"foo","apps.open-cluster-management.io/reconcile-option":"merge"},"creationTimestamp":null,"labels":{"app":"foo","app.kubernetes.io/part-of":"foo","apps.open-cluster-management.io/reconcile-rate":"medium"},"name":"foo-appsub","namespace":"foo"},"spec":{"channel":"git-application-samples-ns/git-application-samples","placement":{"placementRef":{"kind":"PlacementRule","name":"foo-placement"}}},"status":{"ansiblejobs":{},"lastUpdateTime":null}}}]} got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] parsed selector: limit: last 
returned subscription name: , last returned subscription UID: subscription list query: SELECT payload FROM spec.subscriptions WHERE deleted = FALSE AND (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') > ('', '') ORDER BY (payload -> 'metadata' ->> 'name', payload -> 'metadata' ->> 'uid') •[GIN] 2025/08/18 - 00:31:26 | 200 | 8.012660568s | | GET "/global-hub-api/v1/subscriptions?watch" got authenticated user: kube:admin user groups: [system:authenticated system:cluster-admins] getting subscription report for subscription: 812789e3-54ff-4a77-89af-7c37b0f7f60c subscription query with subscription ID: SELECT payload->'metadata'->>'name', payload->'metadata'->>'namespace' FROM spec.subscriptions WHERE deleted = FALSE AND id = ? subscription report query with subscription name and namespace: SELECT payload FROM status.subscription_reports WHERE payload->'metadata'->>'name'= ? AND payload->'metadata'->>'namespace' = ? [GIN] 2025/08/18 - 00:31:26 | 200 | 1.793316ms | | GET "/global-hub-api/v1/subscriptionreport/812789e3-54ff-4a77-89af-7c37b0f7f60c" •2025-08-18 00:31:26.499 UTC [25403] LOG: received fast shutdown request 2025-08-18 00:31:26.500 UTC [25403] LOG: aborting any active transactions 2025-08-18 00:31:26.503 UTC [25403] LOG: background worker "logical replication launcher" (PID 25409) exited with exit code 1 2025-08-18 00:31:26.503 UTC [25404] LOG: shutting down 2025-08-18 00:31:26.503 UTC [25404] LOG: checkpoint starting: shutdown immediate waiting for server to shut down....2025-08-18 00:31:26.526 UTC [25404] LOG: checkpoint complete: wrote 1041 buffers (6.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.020 s, sync=0.003 s, total=0.023 s; sync files=481, longest=0.001 s, average=0.001 s; distance=5321 kB, estimate=5321 kB; lsn=0/1A10E78, redo lsn=0/1A10E78 2025-08-18 00:31:26.535 UTC [25403] LOG: database system is shut down done server stopped Ran 6 of 6 Specs in 35.682 seconds SUCCESS! 
-- 6 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestNonK8sAPI (35.68s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/manager/api 35.728s
failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
=== RUN TestController Running Suite: Manager Controller Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/controller =================================================================================================================================== Random Seed: 1755477050 Will run 12 of 12 specs The files belonging to this database system will be owned by user "1002610000". This user must also own the server process. The database cluster will be initialized with locale "C". The default database encoding has accordingly been set to "SQL_ASCII". The default text search configuration will be set to "english". Data page checksums are disabled. 
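An aside on the queries the TestNonK8sAPI suite logged above: the API server translates Kubernetes label selectors (e.g. labelSelector=app=foo,env!=dev,!testnokey) into PostgreSQL JSONB predicates on payload -> 'metadata' -> 'labels'. A minimal sketch of that mapping, using a simplified, hypothetical Requirement type (the real server parses selectors with apimachinery's labels package, which is not shown here):

```go
package main

import "fmt"

// Requirement is a hypothetical stand-in for one clause of a parsed
// Kubernetes label selector.
type Requirement struct {
	Key, Value string
	Op         string // "=", "!=", "exists", "!exists"
}

// toSQL maps one selector clause to the JSONB predicate form seen in the
// logs above: equality becomes containment (@>), key presence becomes the
// existence operator (?).
func toSQL(r Requirement) string {
	labels := "payload -> 'metadata' -> 'labels'"
	switch r.Op {
	case "=":
		return fmt.Sprintf(`AND %s @> '{"%s": "%s"}'`, labels, r.Key, r.Value)
	case "!=":
		return fmt.Sprintf(`AND NOT (%s @> '{"%s": "%s"}')`, labels, r.Key, r.Value)
	case "exists":
		return fmt.Sprintf(`AND %s ? '%s'`, labels, r.Key)
	case "!exists":
		return fmt.Sprintf(`AND NOT (%s ? '%s')`, labels, r.Key)
	default:
		return ""
	}
}

func main() {
	// labelSelector=app=foo,env!=dev,!testnokey from the subscriptions request above.
	for _, r := range []Requirement{
		{Key: "app", Value: "foo", Op: "="},
		{Key: "env", Value: "dev", Op: "!="},
		{Key: "testnokey", Op: "!exists"},
	} {
		fmt.Println(toSQL(r))
	}
}
```

Note that interpolating keys and values into SQL like this is only safe after the selector has been validated by a real parser; in production code the values should be passed as bind parameters.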
creating directory /tmp/tmp/embedded-postgres-go-52675/extracted/data ... ok creating subdirectories ... ok selecting dynamic shared memory implementation ... posix selecting default max_connections ... 100 selecting default shared_buffers ... 128MB selecting default time zone ... UTC creating configuration files ... ok running bootstrap script ... ok performing post-bootstrap initialization ... ok syncing data to disk ... ok Success. You can now start the database server using: /tmp/tmp/embedded-postgres-go-52675/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-52675/extracted/data -l logfile start waiting for server to start....2025-08-18 00:31:00.065 UTC [25462] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit 2025-08-18 00:31:00.066 UTC [25462] LOG: listening on IPv6 address "::1", port 52675 2025-08-18 00:31:00.066 UTC [25462] LOG: listening on IPv4 address "127.0.0.1", port 52675 2025-08-18 00:31:00.066 UTC [25462] LOG: listening on Unix socket "/tmp/.s.PGSQL.52675" 2025-08-18 00:31:00.067 UTC [25465] LOG: database system was shut down at 2025-08-18 00:30:59 UTC 2025-08-18 00:31:00.070 UTC [25462] LOG: database system is ready to accept connections done server started script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. script 1.upgrade.sql executed successfully. script 1.schemas.sql executed successfully. script 2.tables.sql executed successfully. script 3.functions.sql executed successfully. script 4.trigger.sql executed successfully. 
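The suite bootstraps its schema by running numbered DDL scripts in order (1.schemas.sql, 2.tables.sql, 3.functions.sql, 4.trigger.sql, as logged above). A minimal sketch of such an ordering step, with a hypothetical orderScripts helper (the project's actual runner is not shown in this log):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// orderScripts sorts migration scripts by their numeric filename prefix so
// they execute in dependency order regardless of directory listing order.
func orderScripts(names []string) []string {
	out := append([]string(nil), names...)
	sort.Slice(out, func(i, j int) bool {
		return prefixNum(out[i]) < prefixNum(out[j])
	})
	return out
}

// prefixNum extracts the leading number before the first dot, e.g. "2" from
// "2.tables.sql"; unparseable prefixes sort as 0.
func prefixNum(name string) int {
	n, _ := strconv.Atoi(strings.SplitN(name, ".", 2)[0])
	return n
}

func main() {
	fmt.Println(orderScripts([]string{"3.functions.sql", "1.schemas.sql", "4.trigger.sql", "2.tables.sql"}))
	// → [1.schemas.sql 2.tables.sql 3.functions.sql 4.trigger.sql]
}
```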
2025-08-18T00:31:00.398Z INFO controller/controller.go:175 Starting EventSource {"controller": "backupPvcController", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim", "source": "kind source: *v1.PersistentVolumeClaim"} 2025-08-18T00:31:00.398Z INFO controller/controller.go:183 Starting Controller {"controller": "backupPvcController", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim"} 2025-08-18T00:31:00.421Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.overrides.components[0].configOverrides" ••2025-08-18T00:31:00.498Z INFO controller/controller.go:217 Starting workers {"controller": "backupPvcController", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim", "worker count": 1} ••Time max 2025-09-01 00:00:00 +0000 UTC min 2024-02-01 00:00:00 +0000 UTC expiredTime 2024-01-01 00:00:00 +0000 UTC the expired partition table is created: event.local_policies_2024_01 the expired partition table is created: event.local_root_policies_2024_01 the expired partition table is created: history.local_compliance_2024_01 the expired partition table is created: event.managed_clusters_2024_01 the min partition table is created: event.local_policies_2024_02 the min partition table is created: event.local_root_policies_2024_02 the min partition table is created: history.local_compliance_2024_02 the min partition table is created: event.managed_clusters_2024_02 the deleted record is created: status.managed_clusters the deleted record is created: status.leaf_hubs the deleted record is created: local_spec.policies deleting the expired partition table: event.local_policies_2024_01 deleting the expired partition table: event.local_root_policies_2024_01 deleting the expired partition table: history.local_compliance_2024_01 deleting the expired partition table: event.managed_clusters_2024_01 2025-08-18T00:31:05.605Z INFO data-retention task/data_retention.go:115 create partition 
table {"table": "event.local_policies_2025_09", "start": "2025-09-01", "end": "2025-10-01"} 2025-08-18T00:31:05.619Z INFO data-retention task/data_retention.go:124 delete partition table {"table": "event.local_policies_2024_01"} 2025-08-18T00:31:05.626Z INFO data-retention task/data_retention.go:115 create partition table {"table": "event.local_root_policies_2025_09", "start": "2025-09-01", "end": "2025-10-01"} 2025-08-18T00:31:05.638Z INFO data-retention task/data_retention.go:124 delete partition table {"table": "event.local_root_policies_2024_01"} 2025-08-18T00:31:05.643Z INFO data-retention task/data_retention.go:115 create partition table {"table": "history.local_compliance_2025_09", "start": "2025-09-01", "end": "2025-10-01"} 2025-08-18T00:31:05.648Z INFO data-retention task/data_retention.go:124 delete partition table {"table": "history.local_compliance_2024_01"} 2025-08-18T00:31:05.653Z INFO data-retention task/data_retention.go:115 create partition table {"table": "event.managed_clusters_2025_09", "start": "2025-09-01", "end": "2025-10-01"} 2025-08-18T00:31:05.660Z INFO data-retention task/data_retention.go:124 delete partition table {"table": "event.managed_clusters_2024_01"} 2025-08-18T00:31:05.663Z INFO data-retention task/data_retention.go:135 delete records {"table": "status.managed_clusters", "before": "2024-02-01"} 2025-08-18T00:31:05.665Z INFO data-retention task/data_retention.go:135 delete records {"table": "status.leaf_hubs", "before": "2024-02-01"} 2025-08-18T00:31:05.666Z INFO data-retention task/data_retention.go:135 delete records {"table": "local_spec.policies", "before": "2024-02-01"} 2025-08-18T00:31:05.667Z INFO data-retention task/data_retention.go:99 finish running {"nextRun": "2025-08-25 00:00:00"} deleting the expired record in table: status.managed_clusters deleting the expired record in table: status.leaf_hubs deleting the expired record in table: local_spec.policies •Time Min 2024_02 Max 2025_09 table_name(event.local_policies) | min(local_policies_2024_02) | max(local_policies_2025_09) | min_deletion(0001-01-01) table_name(event.local_root_policies) | min(local_root_policies_2024_02) | max(local_root_policies_2025_09) | 
min_deletion(0001-01-01) table_name(history.local_compliance) | min(local_compliance_2024_02) | max(local_compliance_2025_09) | min_deletion(0001-01-01) table_name(event.managed_clusters) | min(managed_clusters_2024_02) | max(managed_clusters_2025_09) | min_deletion(0001-01-01) table_name(status.managed_clusters) | min() | max() | min_deletion(0001-01-01) table_name(status.leaf_hubs) | min() | max() | min_deletion(0001-01-01) table_name(local_spec.policies) | min() | max() | min_deletion(0001-01-01) •2025-08-18T00:31:06.599Z INFO cronjob/scheduler.go:66 set SyncLocalCompliance job {"scheduleAt": "00:00"} 2025-08-18T00:31:06.599Z INFO cronjob/scheduler.go:75 set DataRetention job {"scheduleAt": "00:00"} 2025-08-18T00:31:06.599Z INFO cronjob/scheduler.go:103 launch the job {"name": "data-retention"} 2025-08-18T00:31:06.599Z INFO cronjob/scheduler.go:108 failed to launch the unknown job immediately {"name": "local-compliance-history"} 2025-08-18T00:31:06.599Z INFO cronjob/scheduler.go:108 failed to launch the unknown job immediately {"name": "unexpected_name"} •2025-08-18T00:31:06.599Z INFO cronjob/scheduler.go:86 start job scheduler 2025-08-18T00:31:06.600Z INFO cronjob/scheduler.go:108 failed to launch the unknown job immediately {"name": ""} 2025-08-18T00:31:06.600Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "0001-01-01 00:00:00"} 2025-08-18T00:31:06.600Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 0} 2025-08-18T00:31:06.601Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0} 2025-08-18T00:31:06.601Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:07"} heartbeat-hub01 2025-08-18 00:31:06.599784 +0000 +0000 active 
heartbeat-hub02 2025-08-18 00:29:06.599784 +0000 +0000 active
heartbeat-hub03 2025-08-18 00:30:46.599784 +0000 +0000 active
heartbeat-hub04 2025-08-18 00:28:06.599784 +0000 +0000 inactive
>> heartbeat: heartbeat-hub04
heartbeat-hub04 2025-08-18 00:30:06.599783602 +0000 UTC m=-44.308429625 inactive
2025-08-18T00:31:06.603Z INFO hubmanagement/hub_management.go:83 hub management status switch frequency {"interval": "1s"}
2025-08-18T00:31:07.600Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:07"}
2025-08-18T00:31:07.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 0}
2025-08-18T00:31:07.601Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:07.601Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:08"}
2025-08-18T00:31:08.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:08"}
2025-08-18T00:31:08.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 0}
2025-08-18T00:31:08.601Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:08.601Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:09"}
2025-08-18T00:31:09.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:09"}
2025-08-18T00:31:09.602Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 0}
2025-08-18T00:31:09.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:09.602Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:10"}
>> hub management[90s]: heartbeat-hub02 -> inactive, heartbeat-hub04 -> active
heartbeat-hub01 2025-08-18 00:31:06.599784 +0000 +0000 active
heartbeat-hub03 2025-08-18 00:30:46.599784 +0000 +0000 active
heartbeat-hub02 2025-08-18 00:29:06.599784 +0000 +0000 inactive
heartbeat-hub04 2025-08-18 00:30:06.599784 +0000 +0000 active
•set local compliance job scheduleAt 00:00
2025-08-18T00:31:09.608Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "0001-01-01 00:00:00"}
2025-08-18T00:31:09.609Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
found the following compliance history:
2025-08-18T00:31:09.610Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 5, "offset": 0}
2025-08-18T00:31:09.612Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 5}
2025-08-18T00:31:09.612Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-19 00:00:00"}
2025-08-18T00:31:10.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:10"}
2025-08-18T00:31:10.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:10.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:10.603Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:10.603Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:11"}
2025-08-18T00:31:11.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:11"}
2025-08-18T00:31:11.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:11.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:11.604Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:11.604Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:12"}
found the following compliance history:
00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000001 compliant 2025-08-18 00:00:00 +0000 +0000 0
00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000002 compliant 2025-08-18 00:00:00 +0000 +0000 0
00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000003 compliant 2025-08-18 00:00:00 +0000 +0000 0
00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000004 compliant 2025-08-18 00:00:00 +0000 +0000 0
00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000005 compliant 2025-08-18 00:00:00 +0000 +0000 0
found the following compliance history job log:
>> 2025-08-18 00:31:09 2025-08-18 00:31:09 local-compliance-history 5 5 0 none
>> 2025-08-18 00:31:10 2025-08-18 00:31:10 local-compliance-history 5 0 0 none
>> 2025-08-18 00:31:11 2025-08-18 00:31:11 local-compliance-history 5 0 0 none
•00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000001 non_compliant 2025-08-18 00:00:00 +0000 +0000 1
•00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000001 non_compliant 2025-08-18 00:00:00 +0000 +0000 2
•00000000-0000-0000-0000-000000000001 00000003-0000-0000-0000-000000000001 unknown 2025-08-18 00:00:00 +0000 +0000 3
•2025-08-18T00:31:12.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:12"}
2025-08-18T00:31:12.603Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:12.603Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:12.605Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:12.605Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:13"}
2025-08-18T00:31:13.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:13"}
2025-08-18T00:31:13.602Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:13.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:13.604Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:13.604Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:14"}
2025-08-18T00:31:14.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:14"}
2025-08-18T00:31:14.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:14.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:14.603Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:14.603Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:15"}
2025-08-18T00:31:15.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:15"}
2025-08-18T00:31:15.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:15.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:15.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:15.603Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:16"}
2025-08-18T00:31:16.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:16"}
2025-08-18T00:31:16.602Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:16.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:16.604Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:16.604Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:17"}
2025-08-18T00:31:17.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:17"}
2025-08-18T00:31:17.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:17.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:17.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:17.602Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:18"}
2025-08-18T00:31:18.605Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:18"}
2025-08-18T00:31:18.616Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:18.616Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:18.617Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:18.617Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:19"}
2025-08-18T00:31:19.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:19"}
2025-08-18T00:31:19.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:19.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:19.603Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:19.603Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:20"}
2025-08-18T00:31:20.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:20"}
2025-08-18T00:31:20.602Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:20.606Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:20.610Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:20.610Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:21"}
2025-08-18T00:31:21.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:21"}
2025-08-18T00:31:21.606Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:21.607Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:21.608Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:21.608Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:22"}
2025-08-18T00:31:22.604Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:22"}
2025-08-18T00:31:22.605Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:22.606Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:22.607Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:22.607Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:23"}
2025-08-18T00:31:23.600Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:23"}
2025-08-18T00:31:23.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:23.601Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:23.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:23.602Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:24"}
2025-08-18T00:31:24.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:24"}
2025-08-18T00:31:24.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:24.604Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:24.605Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:24.605Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:25"}
2025-08-18T00:31:25.603Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:25"}
2025-08-18T00:31:25.604Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:25.604Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:25.605Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:25.605Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:26"}
2025-08-18T00:31:26.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:26"}
2025-08-18T00:31:26.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:26.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:26.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:26.602Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:27"}
2025-08-18T00:31:27.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:27"}
2025-08-18T00:31:27.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:27.601Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:27.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:27.602Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:28"}
2025-08-18T00:31:28.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:28"}
2025-08-18T00:31:28.602Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:28.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:28.604Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:28.604Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:29"}
2025-08-18T00:31:29.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:29"}
2025-08-18T00:31:29.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:29.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:29.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:29.602Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:30"}
2025-08-18T00:31:30.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:30"}
2025-08-18T00:31:30.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:30.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:30.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:30.602Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:31"}
2025-08-18T00:31:31.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:31"}
2025-08-18T00:31:31.602Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:31.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:31.604Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:31.604Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:32"}
2025-08-18T00:31:32.600Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:32"}
2025-08-18T00:31:32.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:32.601Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:32.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:32.602Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:33"}
2025-08-18T00:31:33.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:33"}
2025-08-18T00:31:33.601Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:33.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:33.602Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:33.602Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:34"}
2025-08-18T00:31:34.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:34"}
2025-08-18T00:31:34.602Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:34.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:34.604Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:34.604Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:35"}
2025-08-18T00:31:35.601Z INFO local-compliance-history task/local_compliance_history.go:36 start running {"date": "2025-08-18", "currentRun": "2025-08-18 00:31:35"}
2025-08-18T00:31:35.602Z INFO local-compliance-history task/local_compliance_history.go:63 The number of compliance need to be synchronized {"date": "2025-08-18", "count": 5}
2025-08-18T00:31:35.602Z INFO local-compliance-history task/local_compliance_history.go:124 sync compliance to history {"date": "2025-08-18", "batch": 1000, "batchInsert": 0, "offset": 0}
2025-08-18T00:31:35.604Z INFO local-compliance-history task/local_compliance_history.go:73 The number of compliance has been synchronized {"date": "2025-08-18", "insertedCount": 0}
2025-08-18T00:31:35.604Z INFO local-compliance-history task/local_compliance_history.go:53 finish running {"date": "2025-08-18", "nextRun": "2025-08-18 00:31:36"}
waiting for server to shut down....
2025-08-18 00:31:35.663 UTC [25462] LOG: received fast shutdown request
2025-08-18 00:31:35.664 UTC [25462] LOG: aborting any active transactions
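The cycles above show the local-compliance-history task's dedupe behavior: every run finds the same 5 rows for the day (`count: 5`), but only the first pass inserts them (`insertedCount: 5`, then `0` on every re-run), while paging with `batch`/`offset`. A minimal sketch of that batch-and-skip-duplicates loop, assuming in-memory stand-ins (`row`, `syncOnce`) for the real table and `INSERT ... ON CONFLICT DO NOTHING`:

```go
package main

import "fmt"

// row is a hypothetical stand-in for one compliance record (policy, cluster).
type row struct{ policyID, clusterID string }

// syncOnce pages through today's rows in fixed-size batches and inserts only
// the ones not already present in history, mimicking an idempotent
// INSERT ... ON CONFLICT DO NOTHING. It returns the number actually inserted.
func syncOnce(history map[row]bool, today []row, batch int) int {
	inserted := 0
	for offset := 0; offset < len(today); offset += batch {
		end := offset + batch
		if end > len(today) {
			end = len(today)
		}
		for _, r := range today[offset:end] {
			if !history[r] { // duplicates are skipped, not re-inserted
				history[r] = true
				inserted++
			}
		}
	}
	return inserted
}

func main() {
	history := map[row]bool{}
	today := []row{
		{"policy-1", "cluster-1"}, {"policy-1", "cluster-2"},
		{"policy-1", "cluster-3"}, {"policy-1", "cluster-4"},
		{"policy-1", "cluster-5"},
	}
	fmt.Println(syncOnce(history, today, 1000)) // first run: 5
	fmt.Println(syncOnce(history, today, 1000)) // re-run: 0
}
```

This is why the periodic re-runs are harmless: the sync is idempotent, so running every second (as the test forces) only costs a read.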
2025-08-18 00:31:35.664 UTC [25471] FATAL: terminating connection due to administrator command
2025-08-18 00:31:35.664 UTC [25470] FATAL: terminating connection due to administrator command
2025-08-18 00:31:35.670 UTC [25462] LOG: background worker "logical replication launcher" (PID 25468) exited with exit code 1
2025-08-18 00:31:35.671 UTC [25463] LOG: shutting down
2025-08-18 00:31:35.671 UTC [25463] LOG: checkpoint starting: shutdown immediate
2025-08-18 00:31:35.697 UTC [25463] LOG: checkpoint complete: wrote 1064 buffers (6.5%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.015 s, sync=0.010 s, total=0.027 s; sync files=523, longest=0.007 s, average=0.001 s; distance=5605 kB, estimate=5605 kB; lsn=0/1A57C98, redo lsn=0/1A57C98
2025-08-18 00:31:35.710 UTC [25462] LOG: database system is shut down
done
server stopped
Ran 12 of 12 Specs in 44.824 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestController (44.82s)
PASS
ok github.com/stolostron/multicluster-global-hub/test/integration/manager/controller 44.867s
failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
failed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
=== RUN TestController
Running Suite: Manager Controller Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/migration
==================================================================================================================================
Random Seed: 1755477050
Will run 20 of 20 specs
The files belonging to this database system will be owned by user "1002610000". This user must also own the server process.
The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".
Data page checksums are disabled.
creating directory /tmp/tmp/embedded-postgres-go-12528/extracted/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success.
You can now start the database server using:
    /tmp/tmp/embedded-postgres-go-12528/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-12528/extracted/data -l logfile start
waiting for server to start....
2025-08-18 00:30:59.207 UTC [25446] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
2025-08-18 00:30:59.208 UTC [25446] LOG: listening on IPv6 address "::1", port 12528
2025-08-18 00:30:59.208 UTC [25446] LOG: listening on IPv4 address "127.0.0.1", port 12528
2025-08-18 00:30:59.208 UTC [25446] LOG: listening on Unix socket "/tmp/.s.PGSQL.12528"
2025-08-18 00:30:59.209 UTC [25449] LOG: database system was shut down at 2025-08-18 00:30:59 UTC
2025-08-18 00:30:59.212 UTC [25446] LOG: database system is ready to accept connections
done
server started
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
script 1.upgrade.sql executed successfully.
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
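The suite seeds the embedded database by running numbered DDL scripts in order: schemas before tables, tables before functions and triggers. A small sketch of that convention, assuming the only contract is the numeric filename prefix; `orderScripts` is an illustrative helper, not the project's actual loader:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// numPrefix extracts the leading number from a name like "2.tables.sql".
func numPrefix(name string) int {
	n, _ := strconv.Atoi(strings.SplitN(name, ".", 2)[0])
	return n
}

// orderScripts returns the script names sorted by numeric prefix, so
// dependencies (schemas, tables) always run before dependents (triggers).
func orderScripts(names []string) []string {
	out := append([]string(nil), names...)
	sort.SliceStable(out, func(i, j int) bool {
		return numPrefix(out[i]) < numPrefix(out[j])
	})
	return out
}

func main() {
	scripts := []string{"4.trigger.sql", "2.tables.sql", "1.schemas.sql", "3.functions.sql"}
	for _, s := range orderScripts(scripts) {
		fmt.Printf("script %s executed successfully.\n", s)
	}
}
```

Running the same set twice, as the log shows, works only because each script is written to be re-runnable (e.g. `CREATE ... IF NOT EXISTS`-style DDL).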
2025-08-18T00:30:59.494Z INFO controller/controller.go:175 Starting EventSource {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration", "source": "kind source: *v1alpha1.ManagedClusterMigration"}
2025-08-18T00:30:59.494Z INFO controller/controller.go:175 Starting EventSource {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration", "source": "kind source: *v1beta1.ManagedServiceAccount"}
2025-08-18T00:30:59.494Z INFO controller/controller.go:175 Starting EventSource {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration", "source": "kind source: *v1.Secret"}
2025-08-18T00:30:59.494Z INFO controller/controller.go:183 Starting Controller {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration"}
2025-08-18T00:30:59.605Z INFO controller/controller.go:217 Starting workers {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration", "worker count": 1}
2025-08-18T00:30:59.644Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477059494009856
2025-08-18T00:30:59.645Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:30:59.648Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:30:59.651Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477059494009856 (phase: Validating)
2025-08-18T00:30:59.651Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477059494009856
2025-08-18T00:30:59.654Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: ec6442fa-de84-4a3a-9256-d887b3401e2f
2025-08-18T00:30:59.654Z INFO migration/migration_validating.go:78 migration validating
2025-08-18T00:30:59.654Z INFO migration/migration_validating.go:103 migration validating from hub
2025-08-18T00:30:59.654Z INFO migration/migration_pending.go:101 update condition ResourceValidated(HubClusterInvalid): source hub non-existent-hub: ManagedCluster.cluster.open-cluster-management.io "non-existent-hub" not found, phase: Failed
2025-08-18T00:30:59.659Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477059494009856
2025-08-18T00:30:59.659Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:30:59.659Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:30:59.867Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477059494009856
2025-08-18T00:30:59.868Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477059494009856
•2025-08-18T00:30:59.875Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: ec6442fa-de84-4a3a-9256-d887b3401e2f
2025-08-18T00:30:59.875Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477059494009856
2025-08-18T00:30:59.875Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:30:59.875Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:00.093Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477059869073939
2025-08-18T00:31:00.093Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:00.096Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:00.100Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477059869073939 (phase: Validating)
2025-08-18T00:31:00.100Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477059869073939
2025-08-18T00:31:00.103Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 72284aae-aee6-457a-9d13-cf4cb6e06a0e
2025-08-18T00:31:00.103Z INFO migration/migration_validating.go:78 migration validating
2025-08-18T00:31:00.103Z INFO migration/migration_validating.go:103 migration validating from hub
2025-08-18T00:31:00.103Z INFO migration/migration_validating.go:115 migration validating to hub
2025-08-18T00:31:00.103Z INFO migration/migration_pending.go:101 update condition ResourceValidated(HubClusterInvalid): destination hub non-existent-hub: ManagedCluster.cluster.open-cluster-management.io "non-existent-hub" not found, phase: Failed
2025-08-18T00:31:00.108Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477059869073939
2025-08-18T00:31:00.108Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:00.108Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
•2025-08-18T00:31:00.334Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477059869073939
2025-08-18T00:31:00.334Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477059869073939
2025-08-18T00:31:00.339Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 72284aae-aee6-457a-9d13-cf4cb6e06a0e
2025-08-18T00:31:00.339Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477059869073939
2025-08-18T00:31:00.339Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:00.339Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:00.770Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477060334079873
2025-08-18T00:31:00.770Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:00.774Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:00.788Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:00.793Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477060334079873 (phase: Validating)
2025-08-18T00:31:00.793Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477060334079873
2025-08-18T00:31:00.796Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 311b6d7a-2078-4cb9-9ab6-cccbdc7f75c5
2025-08-18T00:31:00.796Z INFO migration/migration_validating.go:78 migration validating
2025-08-18T00:31:00.796Z INFO migration/migration_validating.go:103 migration validating from hub
2025-08-18T00:31:00.796Z INFO migration/migration_validating.go:115 migration validating to hub
2025-08-18T00:31:00.796Z INFO migration/migration_validating.go:128 migration validating clusters
2025-08-18T00:31:00.797Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterNotFound): no valid managed clusters found in database: [non-existent-cluster], phase: Failed
2025-08-18T00:31:00.800Z INFO migration/migration_controller.go:126 reconcile
managed cluster migration default/migration-test-1755477060334079873 2025-08-18T00:31:00.800Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:31:00.800Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:31:00.987Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477060334079873 2025-08-18T00:31:00.988Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477060334079873 •2025-08-18T00:31:01.003Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 311b6d7a-2078-4cb9-9ab6-cccbdc7f75c5 2025-08-18T00:31:01.003Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477060334079873 2025-08-18T00:31:01.003Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:31:01.003Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:31:01.226Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477060988358595 2025-08-18T00:31:01.226Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:31:01.229Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:31:01.242Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:31:01.247Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477060988358595 (phase: Validating) 2025-08-18T00:31:01.247Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477060988358595 2025-08-18T00:31:01.251Z INFO 
migration/migration_eventstatus.go:30 initialize migration status for migrationId: f8d7b7d1-593d-468f-a028-e1cabb128290 2025-08-18T00:31:01.251Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:31:01.252Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:31:01.252Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:31:01.252Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:31:01.252Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477060988358595-dest 2025-08-18T00:31:01.252Z WARN migration/migration_validating.go:246 cluster cluster-test-1755477060988358595-dest is already on hub hub2-test-1755477060988358595 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validateClustersForMigration /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:246 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validating /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:131 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_controller.go:160 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:01.252Z INFO migration/migration_validating.go:251 1 clusters verify failed 2025-08-18T00:31:01.252Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterConflict): 1 clusters validate failed, please check the events for details, phase: Failed 2025-08-18T00:31:01.259Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477060988358595 2025-08-18T00:31:01.259Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:31:01.259Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:31:01.450Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477060988358595 2025-08-18T00:31:01.450Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477060988358595 •2025-08-18T00:31:01.457Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: f8d7b7d1-593d-468f-a028-e1cabb128290 2025-08-18T00:31:01.458Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477060988358595 2025-08-18T00:31:01.458Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:31:01.458Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:31:01.495Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477061455081372 2025-08-18T00:31:01.495Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:31:01.498Z INFO migration/migration_pending.go:101 update condition 
MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:31:01.502Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477061455081372 (phase: Validating) 2025-08-18T00:31:01.502Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477061455081372 2025-08-18T00:31:01.505Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: d3a8fb46-09fb-4660-ac45-82e0154928a5 2025-08-18T00:31:01.505Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:31:01.505Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:31:01.505Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:31:01.505Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:31:01.506Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477061455081372 2025-08-18T00:31:01.506Z WARN migration/migration_validating.go:246 cluster cluster-test-1755477061455081372 not found in hub hub1-test-1755477061455081372 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validateClustersForMigration /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:246 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).validating /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_validating.go:131 github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_controller.go:160 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:01.506Z INFO migration/migration_validating.go:251 1 clusters verify failed 2025-08-18T00:31:01.506Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterNotFound): 1 clusters validate failed, please check the events for details, phase: Failed 2025-08-18T00:31:01.521Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterNotFound): 1 clusters validate failed, please check the events for details, phase: Failed 2025-08-18T00:31:01.526Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477061455081372 2025-08-18T00:31:01.526Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:31:01.526Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:31:01.717Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477061455081372 2025-08-18T00:31:01.718Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477061455081372 •2025-08-18T00:31:01.726Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: d3a8fb46-09fb-4660-ac45-82e0154928a5 2025-08-18T00:31:01.726Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477061455081372 2025-08-18T00:31:01.726Z INFO 
migration/migration_pending.go:84 no migration selected 2025-08-18T00:31:01.726Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:31:01.742Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477061718390666 2025-08-18T00:31:01.742Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending 2025-08-18T00:31:01.745Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:31:01.748Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477061718390666 (phase: Validating) 2025-08-18T00:31:01.748Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477061718390666 2025-08-18T00:31:01.751Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 55222ced-543f-49d7-b764-79374361f81d 2025-08-18T00:31:01.751Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:31:01.751Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:31:01.751Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:31:01.751Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:31:01.752Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477061718390666 2025-08-18T00:31:01.752Z INFO migration/migration_validating.go:251 0 clusters verify failed 2025-08-18T00:31:01.752Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing 2025-08-18T00:31:01.758Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477061718390666 
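Every validation run above ends with a `ResourceValidated` condition whose reason encodes what failed — `HubClusterInvalid`, `ClusterNotFound`, or `ClusterConflict` — or, when all checks pass, moves the instance to `Initializing`. A minimal Go sketch of that decision order, using hypothetical types and helper names (the real checks in `migration_validating.go` query the database and the ManagedCluster API):

```go
package main

import "fmt"

// migrationSpec is a hypothetical, condensed view of the inputs the
// validating stage works from.
type migrationSpec struct {
	fromHubOK bool // source hub ManagedCluster exists
	toHubOK   bool // destination hub ManagedCluster exists
	missing   int  // clusters not found on the source hub
	conflicts int  // clusters already on the destination hub
}

// validate returns the ResourceValidated reason and the next phase,
// mirroring the reason strings and phases seen in the log.
func validate(m migrationSpec) (reason, phase string) {
	switch {
	case !m.fromHubOK, !m.toHubOK:
		return "HubClusterInvalid", "Failed"
	case m.missing > 0:
		return "ClusterNotFound", "Failed"
	case m.conflicts > 0:
		return "ClusterConflict", "Failed"
	default:
		return "ResourceValidated", "Initializing"
	}
}

func main() {
	r, p := validate(migrationSpec{fromHubOK: true, toHubOK: true})
	fmt.Println(r, p) // ResourceValidated Initializing
}
```

Note that hub existence is checked before cluster membership, which matches the log: the `non-existent-hub` runs fail without ever reaching "migration validating clusters".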
2025-08-18T00:31:01.758Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477061718390666 (phase: Initializing)
2025-08-18T00:31:01.758Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477061718390666
2025-08-18T00:31:01.758Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:01.762Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477061718390666/migration-test-1755477061718390666) to be created
2025-08-18T00:31:01.762Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477061718390666/migration-test-1755477061718390666) to be created, phase: Initializing
2025-08-18T00:31:01.766Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:01.766Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477061718390666 (phase: Initializing)
2025-08-18T00:31:01.766Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477061718390666
2025-08-18T00:31:01.766Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:01.766Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477061718390666/migration-test-1755477061718390666) to be created
2025-08-18T00:31:01.956Z	INFO	KubeAPIWarningLogger	log/warning_handler.go:65	unknown field "status.healthCheck"
2025-08-18T00:31:06.758Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:06.758Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477061718390666 (phase: Initializing)
2025-08-18T00:31:06.758Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477061718390666
2025-08-18T00:31:06.759Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:06.759Z	INFO	migration/migration_initializing.go:96	sent initializing event to target hub hub2-test-1755477061718390666
2025-08-18T00:31:06.759Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(Error): initializing source hub hub1-test-1755477061718390666 with err :initialization failed, phase: Rollbacking
2025-08-18T00:31:06.764Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:06.764Z	INFO	migration/migration_rollbacking.go:59	sending rollback event to source hub: hub1-test-1755477061718390666
2025-08-18T00:31:06.764Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477061718390666 to complete Initializing stage rollback, phase: Rollbacking
2025-08-18T00:31:06.768Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:06.768Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477061718390666 (phase: Rollbacking)
2025-08-18T00:31:06.768Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477061718390666
2025-08-18T00:31:06.768Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:06.768Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477061718390666 to complete Initializing stage rollback, phase: Rollbacking
2025-08-18T00:31:06.784Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:06.784Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477061718390666 (phase: Rollbacking)
2025-08-18T00:31:06.784Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477061718390666
2025-08-18T00:31:06.784Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:06.801Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:06.801Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477061718390666
2025-08-18T00:31:06.805Z	INFO	migration/migration_eventstatus.go:38	clean up migration status for migrationId: 55222ced-543f-49d7-b764-79374361f81d
2025-08-18T00:31:06.805Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:06.805Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:06.805Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
•
2025-08-18T00:31:07.225Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:07.225Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:07.228Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:07.243Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:07.247Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477067001392141 (phase: Validating)
2025-08-18T00:31:07.248Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477067001392141
2025-08-18T00:31:07.252Z	INFO	migration/migration_eventstatus.go:30	initialize migration status for migrationId: 78104601-6076-4227-8be0-36593ee4bcb1
2025-08-18T00:31:07.252Z	INFO	migration/migration_validating.go:78	migration validating
2025-08-18T00:31:07.252Z	INFO	migration/migration_validating.go:103	migration validating from hub
2025-08-18T00:31:07.252Z	INFO	migration/migration_validating.go:115	migration validating to hub
2025-08-18T00:31:07.252Z	INFO	migration/migration_validating.go:128	migration validating clusters
2025-08-18T00:31:07.252Z	INFO	migration/migration_validating.go:240	verify cluster: cluster-test-1755477067001392141
2025-08-18T00:31:07.253Z	INFO	migration/migration_validating.go:251	0 clusters verify failed
2025-08-18T00:31:07.253Z	INFO	migration/migration_pending.go:101	update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:07.256Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:07.256Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477067001392141 (phase: Validating)
2025-08-18T00:31:07.256Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477067001392141
2025-08-18T00:31:07.256Z	INFO	migration/migration_validating.go:78	migration validating
2025-08-18T00:31:07.256Z	INFO	migration/migration_validating.go:103	migration validating from hub
2025-08-18T00:31:07.256Z	INFO	migration/migration_validating.go:115	migration validating to hub
2025-08-18T00:31:07.256Z	INFO	migration/migration_validating.go:128	migration validating clusters
2025-08-18T00:31:07.257Z	INFO	migration/migration_validating.go:240	verify cluster: cluster-test-1755477067001392141
2025-08-18T00:31:07.257Z	INFO	migration/migration_validating.go:251	0 clusters verify failed
2025-08-18T00:31:07.257Z	INFO	migration/migration_pending.go:101	update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:07.271Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:07.271Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477067001392141 (phase: Initializing)
2025-08-18T00:31:07.271Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477067001392141
2025-08-18T00:31:07.271Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:07.275Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477067001392141/migration-test-1755477067001392141) to be created
2025-08-18T00:31:07.275Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477067001392141/migration-test-1755477067001392141) to be created, phase: Initializing
2025-08-18T00:31:07.279Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:07.279Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477067001392141 (phase: Initializing)
2025-08-18T00:31:07.279Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477067001392141
2025-08-18T00:31:07.279Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:07.279Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477067001392141/migration-test-1755477067001392141) to be created
2025-08-18T00:31:11.769Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:11.769Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477067001392141 (phase: Initializing)
2025-08-18T00:31:11.769Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477067001392141
2025-08-18T00:31:11.769Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:11.770Z	INFO	migration/migration_initializing.go:111	sent initialing events to source hubs: hub1-test-1755477067001392141
2025-08-18T00:31:11.770Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(Error): initializing target hub hub2-test-1755477067001392141 with err :initialization failed, phase: Rollbacking
2025-08-18T00:31:11.775Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:11.775Z	INFO	migration/migration_rollbacking.go:59	sending rollback event to source hub: hub1-test-1755477067001392141
2025-08-18T00:31:11.775Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477067001392141 to complete Initializing stage rollback, phase: Rollbacking
2025-08-18T00:31:11.780Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:11.780Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477067001392141 (phase: Rollbacking)
2025-08-18T00:31:11.780Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477067001392141
2025-08-18T00:31:11.780Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:11.971Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:11.971Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477067001392141
2025-08-18T00:31:11.976Z	INFO	migration/migration_eventstatus.go:38	clean up migration status for migrationId: 78104601-6076-4227-8be0-36593ee4bcb1
2025-08-18T00:31:11.976Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:11.976Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:11.976Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
•
2025-08-18T00:31:12.200Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072171822772
2025-08-18T00:31:12.200Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:12.204Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:12.218Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:12.223Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072171822772 (phase: Validating)
2025-08-18T00:31:12.223Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072171822772
2025-08-18T00:31:12.227Z	INFO	migration/migration_eventstatus.go:30	initialize migration status for migrationId: 822f15b2-4f84-43bc-9ac7-9581161c7fa6
2025-08-18T00:31:12.227Z	INFO	migration/migration_validating.go:78	migration validating
2025-08-18T00:31:12.227Z	INFO	migration/migration_validating.go:103	migration validating from hub
2025-08-18T00:31:12.227Z	INFO	migration/migration_validating.go:115	migration validating to hub
2025-08-18T00:31:12.227Z	INFO	migration/migration_validating.go:128	migration validating clusters
2025-08-18T00:31:12.228Z	INFO	migration/migration_validating.go:240	verify cluster: cluster-test-1755477072171822772
2025-08-18T00:31:12.228Z	INFO	migration/migration_validating.go:251	0 clusters verify failed
2025-08-18T00:31:12.228Z	INFO	migration/migration_pending.go:101	update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:12.232Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072171822772
2025-08-18T00:31:12.232Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072171822772 (phase: Initializing)
2025-08-18T00:31:12.232Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072171822772
2025-08-18T00:31:12.232Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:12.235Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477072171822772/migration-test-1755477072171822772) to be created
2025-08-18T00:31:12.235Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477072171822772/migration-test-1755477072171822772) to be created, phase: Initializing
2025-08-18T00:31:12.241Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072171822772
2025-08-18T00:31:12.241Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072171822772 (phase: Initializing)
2025-08-18T00:31:12.241Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072171822772
2025-08-18T00:31:12.241Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:12.241Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477072171822772/migration-test-1755477072171822772) to be created
2025-08-18T00:31:12.257Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:12.257Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072171822772 (phase: Initializing)
2025-08-18T00:31:12.257Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072171822772
2025-08-18T00:31:12.257Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:12.257Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477072171822772/migration-test-1755477072171822772) to be created
2025-08-18T00:31:12.533Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072171822772
•
2025-08-18T00:31:12.534Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072171822772
2025-08-18T00:31:12.545Z	INFO	migration/migration_eventstatus.go:38	clean up migration status for migrationId: 822f15b2-4f84-43bc-9ac7-9581161c7fa6
2025-08-18T00:31:12.545Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072171822772
2025-08-18T00:31:12.545Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:12.545Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
2025-08-18T00:31:12.990Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:12.991Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:12.994Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:12.999Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072534127256 (phase: Validating)
2025-08-18T00:31:12.999Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072534127256
2025-08-18T00:31:13.003Z	INFO	migration/migration_eventstatus.go:30	initialize migration status for migrationId: b7592d53-abfa-46f5-bafd-dd36ae3f7d64
2025-08-18T00:31:13.003Z	INFO	migration/migration_validating.go:78	migration validating
2025-08-18T00:31:13.003Z	INFO	migration/migration_validating.go:103	migration validating from hub
2025-08-18T00:31:13.003Z	INFO	migration/migration_validating.go:115	migration validating to hub
2025-08-18T00:31:13.003Z	INFO	migration/migration_validating.go:128	migration validating clusters
2025-08-18T00:31:13.004Z	INFO	migration/migration_validating.go:240	verify cluster: cluster-test-1755477072534127256
2025-08-18T00:31:13.004Z	INFO	migration/migration_validating.go:251	0 clusters verify failed
2025-08-18T00:31:13.004Z	INFO	migration/migration_pending.go:101	update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:13.008Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:13.008Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072534127256 (phase: Initializing)
2025-08-18T00:31:13.008Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072534127256
2025-08-18T00:31:13.008Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:13.011Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477072534127256/migration-test-1755477072534127256) to be created
2025-08-18T00:31:13.011Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477072534127256/migration-test-1755477072534127256) to be created, phase: Initializing
2025-08-18T00:31:13.016Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:13.016Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072534127256 (phase: Initializing)
2025-08-18T00:31:13.016Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072534127256
2025-08-18T00:31:13.016Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:13.016Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477072534127256/migration-test-1755477072534127256) to be created
2025-08-18T00:31:16.799Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:16.799Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072534127256 (phase: Initializing)
2025-08-18T00:31:16.799Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072534127256
2025-08-18T00:31:16.799Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:16.807Z	INFO	migration/migration_initializing.go:142	migration initializing finished
2025-08-18T00:31:16.807Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying
2025-08-18T00:31:16.817Z	INFO	migration/migration_deploying.go:33	migration deploying
2025-08-18T00:31:16.817Z	INFO	migration/migration_deploying.go:50	migration deploying to source hub: hub1-test-1755477072534127256
2025-08-18T00:31:16.817Z	INFO	migration/migration_pending.go:101	update condition ResourceDeployed(Waiting): waiting for resources to be prepared in the source hub hub1-test-1755477072534127256, phase: Deploying
2025-08-18T00:31:16.823Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:16.823Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072534127256 (phase: Deploying)
2025-08-18T00:31:16.823Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072534127256
2025-08-18T00:31:16.823Z	INFO	migration/migration_deploying.go:33	migration deploying
2025-08-18T00:31:17.233Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072171822772
2025-08-18T00:31:17.233Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072534127256 (phase: Deploying)
2025-08-18T00:31:17.233Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072534127256
2025-08-18T00:31:17.233Z	INFO	migration/migration_deploying.go:33	migration deploying
2025-08-18T00:31:17.233Z	INFO	migration/migration_pending.go:101	update condition ResourceDeployed(Error): deploying source hub hub1-test-1755477072534127256 error: deploying failed, phase: Rollbacking
2025-08-18T00:31:17.237Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:17.237Z	INFO	migration/migration_rollbacking.go:59	sending rollback event to source hub: hub1-test-1755477072534127256
2025-08-18T00:31:17.237Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477072534127256 to complete Deploying stage rollback, phase: Rollbacking
2025-08-18T00:31:17.251Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477072534127256 to complete Deploying stage rollback, phase: Rollbacking
2025-08-18T00:31:17.256Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:17.256Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072534127256 (phase: Rollbacking)
2025-08-18T00:31:17.256Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072534127256
2025-08-18T00:31:17.256Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:17.257Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:17.258Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477072534127256 (phase: Rollbacking)
2025-08-18T00:31:17.258Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072534127256
2025-08-18T00:31:17.258Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:17.393Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:17.393Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477072534127256
•
2025-08-18T00:31:17.422Z	INFO	migration/migration_eventstatus.go:38	clean up migration status for migrationId: b7592d53-abfa-46f5-bafd-dd36ae3f7d64
2025-08-18T00:31:17.422Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:17.422Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:17.422Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
2025-08-18T00:31:17.489Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:17.489Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:17.497Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:17.501Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477077393949822 (phase: Validating)
2025-08-18T00:31:17.502Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477077393949822
2025-08-18T00:31:17.506Z	INFO	migration/migration_eventstatus.go:30	initialize migration status for migrationId: 79027320-fce7-40b2-b482-ada5c397c1c4
2025-08-18T00:31:17.506Z	INFO	migration/migration_validating.go:78	migration validating
2025-08-18T00:31:17.506Z	INFO	migration/migration_validating.go:103	migration validating from hub
2025-08-18T00:31:17.506Z	INFO	migration/migration_validating.go:115	migration validating to hub
2025-08-18T00:31:17.506Z	INFO	migration/migration_validating.go:128	migration validating clusters
2025-08-18T00:31:17.507Z	INFO	migration/migration_validating.go:240	verify cluster: cluster-test-1755477077393949822
2025-08-18T00:31:17.507Z	INFO	migration/migration_validating.go:251	0 clusters verify failed
2025-08-18T00:31:17.507Z	INFO	migration/migration_pending.go:101	update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:17.513Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:17.513Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477077393949822 (phase: Initializing)
2025-08-18T00:31:17.513Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477077393949822
2025-08-18T00:31:17.513Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:17.517Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477077393949822/migration-test-1755477077393949822) to be created
2025-08-18T00:31:17.518Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477077393949822/migration-test-1755477077393949822) to be created, phase: Initializing
2025-08-18T00:31:17.525Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:17.525Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477077393949822 (phase: Initializing)
2025-08-18T00:31:17.525Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477077393949822
2025-08-18T00:31:17.525Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:17.525Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477077393949822/migration-test-1755477077393949822) to be created
2025-08-18T00:31:18.013Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:18.013Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477077393949822 (phase: Initializing)
2025-08-18T00:31:18.013Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477077393949822
2025-08-18T00:31:18.013Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:18.014Z	INFO	migration/migration_initializing.go:142	migration initializing finished
2025-08-18T00:31:18.014Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying
2025-08-18T00:31:18.019Z	INFO	migration/migration_deploying.go:33	migration deploying
2025-08-18T00:31:18.019Z	INFO	migration/migration_deploying.go:50	migration deploying to source hub: hub1-test-1755477077393949822
2025-08-18T00:31:18.019Z	INFO	migration/migration_pending.go:101	update condition ResourceDeployed(Waiting): waiting for resources to be prepared in the source hub hub1-test-1755477077393949822, phase: Deploying
2025-08-18T00:31:18.024Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:18.024Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477077393949822 (phase: Deploying)
2025-08-18T00:31:18.024Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477077393949822
2025-08-18T00:31:18.024Z	INFO	migration/migration_deploying.go:33	migration deploying
2025-08-18T00:31:21.823Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:21.823Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477077393949822 (phase: Deploying)
2025-08-18T00:31:21.823Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477077393949822
2025-08-18T00:31:21.823Z	INFO	migration/migration_deploying.go:33	migration deploying
2025-08-18T00:31:21.824Z	INFO	migration/migration_pending.go:101	update condition ResourceDeployed(Error): deploying source hub hub1-test-1755477077393949822 error: deploying failed, phase: Rollbacking
2025-08-18T00:31:21.837Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:21.837Z	INFO	migration/migration_rollbacking.go:59	sending rollback event to source hub: hub1-test-1755477077393949822
2025-08-18T00:31:21.837Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477077393949822 to complete Deploying stage rollback, phase: Rollbacking
2025-08-18T00:31:21.841Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:21.841Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477077393949822 (phase: Rollbacking)
2025-08-18T00:31:21.841Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477077393949822
2025-08-18T00:31:21.841Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:21.841Z	INFO	migration/migration_rollbacking.go:187	managed service account cleanup will be handled by existing deletion logic for migration migration-test-1755477077393949822
2025-08-18T00:31:21.841Z	INFO	migration/migration_rollbacking.go:134	managed cluster annotation cleanup will be handled by source hub agents
2025-08-18T00:31:21.841Z	INFO	migration/migration_rollbacking.go:141	migration rollbacking finished - transitioning to Failed
2025-08-18T00:31:21.841Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(ResourceRolledBack): Deploying rollback completed successfully., phase: Failed
2025-08-18T00:31:21.858Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(ResourceRolledBack): Deploying rollback completed successfully., phase: Failed
2025-08-18T00:31:21.864Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:21.864Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:21.864Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
•
2025-08-18T00:31:22.071Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:22.071Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477077393949822
2025-08-18T00:31:22.087Z	INFO	migration/migration_eventstatus.go:38	clean up migration status for migrationId: 79027320-fce7-40b2-b482-ada5c397c1c4
2025-08-18T00:31:22.087Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:22.087Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:22.087Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
2025-08-18T00:31:22.256Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072171822772
2025-08-18T00:31:22.257Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:22.257Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
2025-08-18T00:31:22.258Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477067001392141
2025-08-18T00:31:22.258Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:22.258Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
2025-08-18T00:31:22.315Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:22.316Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:22.320Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:22.327Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Validating)
2025-08-18T00:31:22.327Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:22.336Z	INFO	migration/migration_eventstatus.go:30	initialize migration status for migrationId: 01ef1548-3e0a-4998-80a9-7020f5d0c567
2025-08-18T00:31:22.336Z	INFO	migration/migration_validating.go:78	migration validating
2025-08-18T00:31:22.336Z	INFO	migration/migration_validating.go:103	migration validating from hub
2025-08-18T00:31:22.336Z	INFO	migration/migration_validating.go:115	migration validating to hub
2025-08-18T00:31:22.336Z	INFO	migration/migration_validating.go:128	migration validating clusters
2025-08-18T00:31:22.336Z	INFO	migration/migration_validating.go:240	verify cluster: cluster-test-1755477082067988497
2025-08-18T00:31:22.337Z	INFO	migration/migration_validating.go:251	0 clusters verify failed
2025-08-18T00:31:22.337Z	INFO	migration/migration_pending.go:101	update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:22.343Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:22.343Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Initializing)
2025-08-18T00:31:22.343Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:22.343Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:22.347Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477082067988497/migration-test-1755477082067988497) to be created
2025-08-18T00:31:22.347Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477082067988497/migration-test-1755477082067988497) to be created, phase: Initializing
2025-08-18T00:31:22.352Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:22.352Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Initializing)
2025-08-18T00:31:22.352Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:22.352Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:22.352Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477082067988497/migration-test-1755477082067988497) to be created
2025-08-18T00:31:22.519Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:22.519Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Initializing)
2025-08-18T00:31:22.519Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:22.519Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:22.520Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477082067988497/migration-test-1755477082067988497) to be created
2025-08-18T00:31:23.024Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:23.024Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Initializing)
2025-08-18T00:31:23.024Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:23.024Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:23.025Z	INFO	migration/migration_initializing.go:142	migration initializing finished
2025-08-18T00:31:23.025Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying
2025-08-18T00:31:23.030Z	INFO	migration/migration_deploying.go:33	migration deploying
2025-08-18T00:31:23.030Z	INFO	migration/migration_deploying.go:92	migration deploying finished
2025-08-18T00:31:23.030Z	INFO	migration/migration_pending.go:101	update condition ResourceDeployed(ResourcesDeployed): Resources have been successfully deployed to the target hub cluster, phase: Registering
2025-08-18T00:31:23.035Z	INFO	migration/migration_registering.go:34	migration registering
2025-08-18T00:31:23.035Z	INFO	migration/migration_registering.go:49	migration registering: hub1-test-1755477082067988497
2025-08-18T00:31:23.035Z	INFO	migration/migration_pending.go:101	update condition ClusterRegistered(Waiting): waiting for managed clusters to migrating from source hub hub1-test-1755477082067988497, phase: Registering
2025-08-18T00:31:23.064Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:23.064Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Registering)
2025-08-18T00:31:23.064Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:23.064Z	INFO	migration/migration_registering.go:34	migration registering
2025-08-18T00:31:26.841Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:26.841Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Registering)
2025-08-18T00:31:26.841Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:26.841Z	INFO	migration/migration_registering.go:34	migration registering
2025-08-18T00:31:26.841Z	INFO	migration/migration_pending.go:101	update condition ClusterRegistered(Error): registering to hub hub1-test-1755477082067988497 error: registering failed, phase: Rollbacking
2025-08-18T00:31:26.852Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:26.852Z	INFO	migration/migration_rollbacking.go:59	sending rollback event to source hub: hub1-test-1755477082067988497
2025-08-18T00:31:26.852Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477082067988497 to complete Registering stage rollback, phase: Rollbacking
2025-08-18T00:31:26.859Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:26.859Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Rollbacking)
2025-08-18T00:31:26.859Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:26.859Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:26.859Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477082067988497 to complete Registering stage rollback, phase: Rollbacking
2025-08-18T00:31:26.879Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:26.879Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Rollbacking)
2025-08-18T00:31:26.879Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:26.879Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:27.344Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:27.344Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Rollbacking)
2025-08-18T00:31:27.344Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:27.344Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:27.344Z	INFO	migration/migration_rollbacking.go:98	sending rollback event to destination hub: hub2-test-1755477082067988497
2025-08-18T00:31:27.344Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Waiting): waiting for target hub hub2-test-1755477082067988497 to complete Registering stage rollback, phase: Rollbacking
2025-08-18T00:31:27.349Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:27.349Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Rollbacking)
2025-08-18T00:31:27.349Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:27.349Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:27.522Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477077393949822
2025-08-18T00:31:27.522Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Rollbacking)
2025-08-18T00:31:27.522Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:27.522Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:28.065Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:28.065Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Rollbacking)
2025-08-18T00:31:28.065Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:28.065Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:31.860Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:31.860Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477082067988497 (phase: Rollbacking)
2025-08-18T00:31:31.860Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:31.860Z	INFO	migration/migration_rollbacking.go:40	migration rollbacking started
2025-08-18T00:31:31.860Z	INFO	migration/migration_pending.go:101	update condition ResourceRolledBack(Timeout): [Timeout] waiting for target hub hub2-test-1755477082067988497 to complete Registering stage rollback., phase: Failed
2025-08-18T00:31:31.867Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:31.867Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:31.867Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
•
2025-08-18T00:31:32.079Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:32.079Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477082067988497
2025-08-18T00:31:32.085Z	INFO	migration/migration_eventstatus.go:38	clean up migration status for migrationId: 01ef1548-3e0a-4998-80a9-7020f5d0c567
2025-08-18T00:31:32.085Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:32.085Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:32.085Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
••
2025-08-18T00:31:32.095Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/test-migration-20250818-003132-000
2025-08-18T00:31:32.095Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:32.098Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
••
2025-08-18T00:31:32.112Z	ERROR	migration/migration_pending.go:76	failed to update migration to started: ManagedClusterMigration.global-hub.open-cluster-management.io "test-migration-20250818-003132-000" not found
github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).selectAndPrepareMigration
	/go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_pending.go:76
github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_controller.go:129
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:32.112Z	ERROR	migration/migration_controller.go:131	failed to get managedclustermigration ManagedClusterMigration.global-hub.open-cluster-management.io "test-migration-20250818-003132-000" not found
github.com/stolostron/multicluster-global-hub/manager/pkg/migration.(*ClusterMigrationController).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/migration/migration_controller.go:131
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:32.112Z	ERROR	controller/controller.go:316	Reconciler error	{"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration", "ManagedClusterMigration": {"name":"test-migration-20250818-003132-000","namespace":"default"}, "namespace": "default", "name": "test-migration-20250818-003132-000", "reconcileID": "a136be7c-e6e3-4a5f-b463-4d831f6179d2", "error": "ManagedClusterMigration.global-hub.open-cluster-management.io \"test-migration-20250818-003132-000\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:32.112Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/test-migration-20250818-003132-000
2025-08-18T00:31:32.112Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:32.112Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
2025-08-18T00:31:32.117Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/test-migration-20250818-003132-000
2025-08-18T00:31:32.117Z	INFO	migration/migration_pending.go:84	no migration selected
2025-08-18T00:31:32.117Z	INFO	migration/migration_controller.go:135	no desired managedclustermigration found
2025-08-18T00:31:32.132Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477092105249519
2025-08-18T00:31:32.132Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:32.136Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:32.150Z	INFO	migration/migration_pending.go:101	update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:32.155Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477092105249519 (phase: Validating)
2025-08-18T00:31:32.155Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477092105249519
2025-08-18T00:31:32.159Z	INFO	migration/migration_eventstatus.go:30	initialize migration status for migrationId: 832d4c80-3a01-42b7-b61e-3082348c7072
2025-08-18T00:31:32.159Z	INFO	migration/migration_validating.go:78	migration validating
2025-08-18T00:31:32.159Z	INFO	migration/migration_validating.go:103	migration validating from hub
2025-08-18T00:31:32.159Z	INFO	migration/migration_validating.go:115	migration validating to hub
2025-08-18T00:31:32.159Z	INFO	migration/migration_validating.go:128	migration validating clusters
2025-08-18T00:31:32.160Z	INFO	migration/migration_validating.go:240	verify cluster: cluster-test-1755477092105249519
2025-08-18T00:31:32.160Z	INFO	migration/migration_validating.go:251	0 clusters verify failed
2025-08-18T00:31:32.160Z	INFO	migration/migration_pending.go:101	update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:32.164Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477092105249519
2025-08-18T00:31:32.164Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477092105249519 (phase: Initializing)
2025-08-18T00:31:32.164Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477092105249519
2025-08-18T00:31:32.164Z	INFO	migration/migration_initializing.go:52	migration initializing started
2025-08-18T00:31:32.167Z	INFO	migration/migration_initializing.go:76	waiting for token secret (hub2-test-1755477092105249519/migration-test-1755477092105249519) to be created
2025-08-18T00:31:32.167Z	INFO	migration/migration_pending.go:101	update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477092105249519/migration-test-1755477092105249519) to be created, phase: Initializing
2025-08-18T00:31:32.173Z	INFO	migration/migration_controller.go:126	reconcile managed cluster migration default/migration-test-1755477092105249519
2025-08-18T00:31:32.173Z	INFO	migration/migration_pending.go:82	selected migration: migration-test-1755477092105249519 (phase: Initializing)
2025-08-18T00:31:32.173Z	INFO	migration/migration_controller.go:139	processing migration instance: migration-test-1755477092105249519
2025-08-18T00:31:32.173Z
INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:31:32.173Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477092105249519/migration-test-1755477092105249519) to be created 2025-08-18T00:31:32.349Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477082067988497 2025-08-18T00:31:32.349Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477092105249519 (phase: Initializing) 2025-08-18T00:31:32.349Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092105249519 2025-08-18T00:31:32.349Z INFO migration/migration_initializing.go:52 migration initializing started 2025-08-18T00:31:32.349Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477092105249519/migration-test-1755477092105249519) to be created 2025-08-18T00:31:32.351Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092105249519 2025-08-18T00:31:32.351Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092105249519 •2025-08-18T00:31:32.362Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 832d4c80-3a01-42b7-b61e-3082348c7072 2025-08-18T00:31:32.362Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092105249519 2025-08-18T00:31:32.362Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:31:32.362Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:31:32.383Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092355585521 2025-08-18T00:31:32.384Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the 
migration to start, phase: Pending 2025-08-18T00:31:32.388Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating 2025-08-18T00:31:32.392Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477092355585521 (phase: Validating) 2025-08-18T00:31:32.392Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092355585521 2025-08-18T00:31:32.395Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: b7ab964b-6be7-49e1-916a-391be7430f7e 2025-08-18T00:31:32.396Z INFO migration/migration_validating.go:78 migration validating 2025-08-18T00:31:32.396Z INFO migration/migration_validating.go:103 migration validating from hub 2025-08-18T00:31:32.396Z INFO migration/migration_validating.go:115 migration validating to hub 2025-08-18T00:31:32.396Z INFO migration/migration_validating.go:128 migration validating clusters 2025-08-18T00:31:32.397Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterNotFound): no valid managed clusters found in database: [non-existent-cluster], phase: Failed 2025-08-18T00:31:32.411Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ClusterNotFound): no valid managed clusters found in database: [non-existent-cluster], phase: Failed 2025-08-18T00:31:32.416Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092355585521 2025-08-18T00:31:32.416Z INFO migration/migration_pending.go:84 no migration selected 2025-08-18T00:31:32.416Z INFO migration/migration_controller.go:135 no desired managedclustermigration found 2025-08-18T00:31:32.525Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477077393949822 2025-08-18T00:31:32.526Z INFO migration/migration_pending.go:84 no migration selected 
2025-08-18T00:31:32.526Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:32.600Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092355585521
2025-08-18T00:31:32.600Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092355585521
•
2025-08-18T00:31:32.610Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: b7ab964b-6be7-49e1-916a-391be7430f7e
2025-08-18T00:31:32.610Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092355585521
2025-08-18T00:31:32.611Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:32.611Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:32.830Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092600686303
2025-08-18T00:31:32.830Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:32.833Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:32.847Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:32.852Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477092600686303 (phase: Validating)
2025-08-18T00:31:32.852Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092600686303
2025-08-18T00:31:32.857Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 2a0b2157-aaaf-4b3c-b33d-b863e1edcf74
2025-08-18T00:31:32.857Z INFO migration/migration_validating.go:78 migration validating
2025-08-18T00:31:32.857Z INFO migration/migration_validating.go:103 migration validating from hub
2025-08-18T00:31:32.857Z INFO migration/migration_validating.go:115 migration validating to hub
2025-08-18T00:31:32.857Z INFO migration/migration_validating.go:128 migration validating clusters
2025-08-18T00:31:32.858Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477092600686303
2025-08-18T00:31:32.858Z INFO migration/migration_validating.go:251 0 clusters verify failed
2025-08-18T00:31:32.858Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:32.862Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092600686303
2025-08-18T00:31:32.863Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477092600686303 (phase: Initializing)
2025-08-18T00:31:32.863Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092600686303
2025-08-18T00:31:32.863Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:32.866Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477092600686303/migration-test-1755477092600686303) to be created
2025-08-18T00:31:32.866Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477092600686303/migration-test-1755477092600686303) to be created, phase: Initializing
2025-08-18T00:31:32.871Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092600686303
2025-08-18T00:31:32.871Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477092600686303 (phase: Initializing)
2025-08-18T00:31:32.871Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092600686303
2025-08-18T00:31:32.871Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:32.871Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477092600686303/migration-test-1755477092600686303) to be created
2025-08-18T00:31:33.065Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:33.065Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477092600686303 (phase: Initializing)
2025-08-18T00:31:33.065Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092600686303
2025-08-18T00:31:33.065Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:33.065Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477092600686303/migration-test-1755477092600686303) to be created
2025-08-18T00:31:36.868Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:36.868Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477092600686303 (phase: Initializing)
2025-08-18T00:31:36.868Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092600686303
2025-08-18T00:31:36.868Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:36.868Z INFO migration/migration_initializing.go:142 migration initializing finished
2025-08-18T00:31:36.868Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying
2025-08-18T00:31:36.880Z INFO migration/migration_deploying.go:33 migration deploying
2025-08-18T00:31:36.880Z INFO migration/migration_deploying.go:50 migration deploying to source hub: hub1-test-1755477092600686303
2025-08-18T00:31:36.881Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(Waiting): waiting for resources to be prepared in the source hub hub1-test-1755477092600686303, phase: Deploying
2025-08-18T00:31:36.884Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092600686303
2025-08-18T00:31:36.885Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477092600686303 (phase: Deploying)
2025-08-18T00:31:36.885Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092600686303
2025-08-18T00:31:36.885Z INFO migration/migration_deploying.go:33 migration deploying
2025-08-18T00:31:36.885Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092600686303
2025-08-18T00:31:36.885Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477092600686303 (phase: Deploying)
2025-08-18T00:31:36.885Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092600686303
2025-08-18T00:31:36.885Z INFO migration/migration_deploying.go:33 migration deploying
2025-08-18T00:31:36.973Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092600686303
2025-08-18T00:31:36.973Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477092600686303
2025-08-18T00:31:36.979Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 2a0b2157-aaaf-4b3c-b33d-b863e1edcf74
2025-08-18T00:31:36.979Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092600686303
2025-08-18T00:31:36.979Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:36.979Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
•
2025-08-18T00:31:37.004Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477096980468251
2025-08-18T00:31:37.004Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:37.008Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:37.011Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477096980468251 (phase: Validating)
2025-08-18T00:31:37.011Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477096980468251
2025-08-18T00:31:37.015Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 03e8b361-48ca-4920-b6ca-6974f6b98d68
2025-08-18T00:31:37.015Z INFO migration/migration_validating.go:78 migration validating
2025-08-18T00:31:37.015Z INFO migration/migration_validating.go:103 migration validating from hub
2025-08-18T00:31:37.015Z INFO migration/migration_validating.go:115 migration validating to hub
2025-08-18T00:31:37.015Z INFO migration/migration_validating.go:128 migration validating clusters
2025-08-18T00:31:37.016Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477096980468251
2025-08-18T00:31:37.016Z INFO migration/migration_validating.go:251 0 clusters verify failed
2025-08-18T00:31:37.016Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:37.019Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477096980468251
2025-08-18T00:31:37.019Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477096980468251 (phase: Initializing)
2025-08-18T00:31:37.019Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477096980468251
2025-08-18T00:31:37.019Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:37.022Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477096980468251/migration-test-1755477096980468251) to be created
2025-08-18T00:31:37.022Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477096980468251/migration-test-1755477096980468251) to be created, phase: Initializing
2025-08-18T00:31:37.027Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477096980468251
2025-08-18T00:31:37.027Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477096980468251 (phase: Initializing)
2025-08-18T00:31:37.027Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477096980468251
2025-08-18T00:31:37.027Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:37.027Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477096980468251/migration-test-1755477096980468251) to be created
2025-08-18T00:31:37.165Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092105249519
2025-08-18T00:31:37.165Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477096980468251 (phase: Initializing)
2025-08-18T00:31:37.165Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477096980468251
2025-08-18T00:31:37.165Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:37.165Z INFO migration/migration_initializing.go:96 sent initializing event to target hub hub2-test-1755477096980468251
2025-08-18T00:31:37.165Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Error): initializing source hub hub1-test-1755477096980468251 with err :initialization failed, phase: Rollbacking
2025-08-18T00:31:37.170Z INFO migration/migration_rollbacking.go:40 migration rollbacking started
2025-08-18T00:31:37.170Z INFO migration/migration_rollbacking.go:59 sending rollback event to source hub: hub1-test-1755477096980468251
2025-08-18T00:31:37.170Z INFO migration/migration_pending.go:101 update condition ResourceRolledBack(Waiting): waiting for source hub hub1-test-1755477096980468251 to complete Initializing stage rollback, phase: Rollbacking
2025-08-18T00:31:37.174Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477096980468251
2025-08-18T00:31:37.174Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477096980468251 (phase: Rollbacking)
2025-08-18T00:31:37.174Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477096980468251
2025-08-18T00:31:37.175Z INFO migration/migration_rollbacking.go:40 migration rollbacking started
2025-08-18T00:31:37.335Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477096980468251
2025-08-18T00:31:37.335Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477096980468251
•
2025-08-18T00:31:37.341Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 03e8b361-48ca-4920-b6ca-6974f6b98d68
2025-08-18T00:31:37.341Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477096980468251
2025-08-18T00:31:37.341Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:37.341Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:37.349Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477082067988497
2025-08-18T00:31:37.349Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:37.349Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:37.561Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:37.561Z INFO migration/migration_pending.go:101 update condition MigrationStarted(Waiting): Waiting for the migration to start, phase: Pending
2025-08-18T00:31:37.569Z INFO migration/migration_pending.go:101 update condition MigrationStarted(InstanceStarted): Migration instance is started, phase: Validating
2025-08-18T00:31:37.573Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Validating)
2025-08-18T00:31:37.573Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:37.577Z INFO migration/migration_eventstatus.go:30 initialize migration status for migrationId: 5e940aad-43b1-4506-b9f0-198d8a8f4e80
2025-08-18T00:31:37.577Z INFO migration/migration_validating.go:78 migration validating
2025-08-18T00:31:37.577Z INFO migration/migration_validating.go:103 migration validating from hub
2025-08-18T00:31:37.577Z INFO migration/migration_validating.go:115 migration validating to hub
2025-08-18T00:31:37.577Z INFO migration/migration_validating.go:128 migration validating clusters
2025-08-18T00:31:37.577Z INFO migration/migration_validating.go:240 verify cluster: cluster-test-1755477097336037468
2025-08-18T00:31:37.577Z INFO migration/migration_validating.go:251 0 clusters verify failed
2025-08-18T00:31:37.577Z INFO migration/migration_pending.go:101 update condition ResourceValidated(ResourceValidated): Migration resources have been validated, phase: Initializing
2025-08-18T00:31:37.581Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:37.581Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Initializing)
2025-08-18T00:31:37.581Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:37.581Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:37.584Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477097336037468/migration-test-1755477097336037468) to be created
2025-08-18T00:31:37.584Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(Waiting): waiting for token secret (hub2-test-1755477097336037468/migration-test-1755477097336037468) to be created, phase: Initializing
2025-08-18T00:31:37.588Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:37.588Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Initializing)
2025-08-18T00:31:37.588Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:37.588Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:37.589Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477097336037468/migration-test-1755477097336037468) to be created
2025-08-18T00:31:37.863Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092600686303
2025-08-18T00:31:37.863Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Initializing)
2025-08-18T00:31:37.863Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:37.863Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:37.863Z INFO migration/migration_initializing.go:76 waiting for token secret (hub2-test-1755477097336037468/migration-test-1755477097336037468) to be created
2025-08-18T00:31:38.066Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:38.066Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Initializing)
2025-08-18T00:31:38.066Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:38.066Z INFO migration/migration_initializing.go:52 migration initializing started
2025-08-18T00:31:38.066Z INFO migration/migration_initializing.go:142 migration initializing finished
2025-08-18T00:31:38.066Z INFO migration/migration_pending.go:101 update condition ResourceInitialized(ResourceInitialized): All source and target hubs have been successfully initialized, phase: Deploying
2025-08-18T00:31:38.072Z INFO migration/migration_deploying.go:33 migration deploying
2025-08-18T00:31:38.072Z INFO migration/migration_deploying.go:50 migration deploying to source hub: hub1-test-1755477097336037468
2025-08-18T00:31:38.072Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(Waiting): waiting for resources to be prepared in the source hub hub1-test-1755477097336037468, phase: Deploying
2025-08-18T00:31:38.077Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:38.077Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Deploying)
2025-08-18T00:31:38.077Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:38.077Z INFO migration/migration_deploying.go:33 migration deploying
2025-08-18T00:31:38.077Z INFO migration/migration_deploying.go:92 migration deploying finished
2025-08-18T00:31:38.077Z INFO migration/migration_pending.go:101 update condition ResourceDeployed(ResourcesDeployed): Resources have been successfully deployed to the target hub cluster, phase: Registering
2025-08-18T00:31:38.082Z INFO migration/migration_registering.go:34 migration registering
2025-08-18T00:31:38.082Z INFO migration/migration_registering.go:49 migration registering: hub1-test-1755477097336037468
2025-08-18T00:31:38.083Z INFO migration/migration_pending.go:101 update condition ClusterRegistered(Waiting): waiting for managed clusters to migrating from source hub hub1-test-1755477097336037468, phase: Registering
2025-08-18T00:31:38.088Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:38.088Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Registering)
2025-08-18T00:31:38.088Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:38.088Z INFO migration/migration_registering.go:34 migration registering
2025-08-18T00:31:41.885Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:41.885Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Registering)
2025-08-18T00:31:41.885Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:41.885Z INFO migration/migration_registering.go:34 migration registering
2025-08-18T00:31:41.885Z INFO migration/migration_pending.go:101 update condition ClusterRegistered(ClusterRegistered): All migrated clusters have been successfully registered, phase: Cleaning
2025-08-18T00:31:41.890Z INFO migration/migration_cleaning.go:37 migration start cleaning
2025-08-18T00:31:41.893Z INFO migration/migration_pending.go:101 update condition ResourceCleaned(Waiting): The target hub hub2-test-1755477097336037468 is cleaning, phase: Cleaning
2025-08-18T00:31:41.897Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:41.897Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Cleaning)
2025-08-18T00:31:41.897Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:41.897Z INFO migration/migration_cleaning.go:37 migration start cleaning
2025-08-18T00:31:41.897Z INFO migration/migration_pending.go:101 update condition ResourceCleaned(Waiting): The target hub hub2-test-1755477097336037468 is cleaning, phase: Cleaning
2025-08-18T00:31:41.911Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:41.911Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Cleaning)
2025-08-18T00:31:41.911Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:41.911Z INFO migration/migration_cleaning.go:37 migration start cleaning
2025-08-18T00:31:42.020Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477096980468251
2025-08-18T00:31:42.020Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Cleaning)
2025-08-18T00:31:42.020Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:42.020Z INFO migration/migration_cleaning.go:37 migration start cleaning
2025-08-18T00:31:42.175Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092105249519
2025-08-18T00:31:42.175Z INFO migration/migration_pending.go:82 selected migration: migration-test-1755477097336037468 (phase: Cleaning)
2025-08-18T00:31:42.175Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
2025-08-18T00:31:42.175Z INFO migration/migration_cleaning.go:37 migration start cleaning
2025-08-18T00:31:42.175Z INFO migration/migration_cleaning.go:112 migration cleaning finished
2025-08-18T00:31:42.175Z INFO migration/migration_pending.go:101 update condition ResourceCleaned(ResourceCleaned): Resources have been successfully cleaned up from the hub clusters, phase: Completed
2025-08-18T00:31:42.181Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:42.181Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:42.181Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:42.301Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:42.302Z INFO migration/migration_controller.go:139 processing migration instance: migration-test-1755477097336037468
•
2025-08-18T00:31:42.307Z INFO migration/migration_eventstatus.go:38 clean up migration status for migrationId: 5e940aad-43b1-4506-b9f0-198d8a8f4e80
2025-08-18T00:31:42.307Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:42.307Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:42.307Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:42.582Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477097336037468
2025-08-18T00:31:42.582Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:42.582Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:42.864Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477092600686303
2025-08-18T00:31:42.864Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:42.864Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:43.078Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477072534127256
2025-08-18T00:31:43.078Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:43.078Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:46.897Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477061718390666
2025-08-18T00:31:46.897Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:46.897Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
2025-08-18T00:31:47.021Z INFO migration/migration_controller.go:126 reconcile managed cluster migration default/migration-test-1755477096980468251
2025-08-18T00:31:47.021Z INFO migration/migration_pending.go:84 no migration selected
2025-08-18T00:31:47.021Z INFO migration/migration_controller.go:135 no desired managedclustermigration found
waiting for server to shut down...
2025-08-18 00:32:04.325 UTC [25446] LOG: received fast shutdown request
2025-08-18 00:32:04.326 UTC [25446] LOG: aborting any active transactions
2025-08-18 00:32:04.327 UTC [25446] LOG: background worker "logical replication launcher" (PID 25452) exited with exit code 1
2025-08-18 00:32:04.327 UTC [25447] LOG: shutting down
2025-08-18 00:32:04.327 UTC [25447] LOG: checkpoint starting: shutdown immediate
2025-08-18 00:32:04.340 UTC [25447] LOG: checkpoint complete: wrote 1021 buffers (6.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.011 s, sync=0.002 s, total=0.013 s; sync files=481, longest=0.001 s, average=0.001 s; distance=5360 kB, estimate=5360 kB; lsn=0/1A1A870, redo lsn=0/1A1A870
2025-08-18 00:32:04.346 UTC [25446] LOG: database system is shut down
done
server stopped
Ran 20 of 20 Specs in 73.484 seconds
SUCCESS! -- 20 Passed | 0 Failed | 0 Pending | 0 Skipped
2025-08-18T00:32:04.426Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
2025-08-18T00:32:04.427Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
--- PASS: TestController (73.48s)
PASS
2025-08-18T00:32:04.427Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration"}
2025-08-18T00:32:04.427Z INFO controller/controller.go:239 All workers finished {"controller": "migration-ctrl", "controllerGroup": "global-hub.open-cluster-management.io", "controllerKind": "ManagedClusterMigration"}
2025-08-18T00:32:04.427Z INFO manager/internal.go:550 Stopping and waiting for caches
ok  github.com/stolostron/multicluster-global-hub/test/integration/manager/migration 73.527s
failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource
"customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scopefailed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scopefailed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope=== RUN TestSpecSyncer Running Suite: Spec Syncer Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/spec ====================================================================================================================== Random Seed: 1755477064 Will run 16 of 16 specs The files belonging to this database system will be owned by user "1002610000". This user must also own the server process. The database cluster will be initialized with locale "C". The default database encoding has accordingly been set to "SQL_ASCII". The default text search configuration will be set to "english". Data page checksums are disabled. creating directory /tmp/tmp/embedded-postgres-go-4252/extracted/data ... ok creating subdirectories ... ok selecting dynamic shared memory implementation ... posix selecting default max_connections ... 100 selecting default shared_buffers ... 128MB selecting default time zone ... UTC creating configuration files ... ok running bootstrap script ... ok performing post-bootstrap initialization ... ok syncing data to disk ... ok Success. 
You can now start the database server using:
    /tmp/tmp/embedded-postgres-go-4252/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-4252/extracted/data -l logfile start
waiting for server to start....2025-08-18 00:31:14.726 UTC [26171] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
2025-08-18 00:31:14.727 UTC [26171] LOG: listening on IPv6 address "::1", port 4252
2025-08-18 00:31:14.727 UTC [26171] LOG: listening on IPv4 address "127.0.0.1", port 4252
2025-08-18 00:31:14.727 UTC [26171] LOG: listening on Unix socket "/tmp/.s.PGSQL.4252"
2025-08-18 00:31:14.754 UTC [26181] LOG: database system was shut down at 2025-08-18 00:31:14 UTC
2025-08-18 00:31:14.757 UTC [26171] LOG: database system is ready to accept connections
done
server started
2025-08-18T00:31:14.918Z INFO utils/utils.go:71 failed to read file ca-cert-path - open ca-cert-path: no such file or directory
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
script 1.upgrade.sql executed successfully.
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
2025-08-18T00:31:15.221Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver
2025-08-18T00:31:15.223Z INFO spec/dispatcher.go:51 started dispatching received bundles...
2025-08-18T00:31:15.223Z INFO db-to-transport-syncer-policy syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.223Z INFO db-to-transport-syncer-placementrulebiding syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.224Z INFO db-to-transport-syncer-placementrule syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.224Z INFO db-to-transport-syncer-subscriptions syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.224Z INFO db-to-transport-syncer-application syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.224Z INFO db-to-transport-syncer-managedclusterlabel syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.224Z INFO db-to-transport-syncer-channels syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.224Z INFO db-to-transport-syncer-managedclusterset syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.224Z INFO db-to-transport-syncer-placements syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.224Z INFO managed-cluster-labels-syncer syncers/managedcluster_labels_watcher.go:49 initialized watcherspecmanaged_clusters_labelsstatus tablemanaged_clusters
2025-08-18T00:31:15.224Z INFO db-to-transport-syncer-managedclustersetbinding syncers/generic_syncer.go:26 initialized syncer
2025-08-18T00:31:15.224Z INFO controller/controller.go:175 Starting EventSource {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule", "source": "kind source: *v1.PlacementRule"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:183 Starting Controller {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:175 Starting EventSource {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet", "source": "kind source: *v1beta2.ManagedClusterSet"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:183 Starting Controller {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:175 Starting EventSource {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding", "source": "kind source: *v1beta2.ManagedClusterSetBinding"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:183 Starting Controller {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:175 Starting EventSource {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel", "source": "kind source: *v1.Channel"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:183 Starting Controller {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:175 Starting EventSource {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding", "source": "kind source: *v1.PlacementBinding"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:183 Starting Controller {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:175 Starting EventSource {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement", "source": "kind source: *v1beta1.Placement"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:183 Starting Controller {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:175 Starting EventSource {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "source": "kind source: *v1.Policy"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:183 Starting Controller {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:175 Starting EventSource {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application", "source": "kind source: *v1beta1.Application"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:183 Starting Controller {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application"}
2025-08-18T00:31:15.224Z INFO controller/controller.go:175 Starting EventSource {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription", "source": "kind source: *v1.Subscription"}
2025-08-18T00:31:15.225Z INFO controller/controller.go:183 Starting Controller {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription"}
checking postgres...
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "ManagedClusterSetBindings"}
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "PlacementRules"}
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Applications"}
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "ManagedClustersLabels"}
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "ManagedClusterSets"}
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Policies"}
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "PlacementBindings"}
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Placements"}
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Subscriptions"}
2025-08-18T00:31:15.277Z INFO spec/dispatcher.go:46 dispatch syncer is registered {"messageID": "Channels"}
agent spec sync the resource from manager: PlacementRules
agent spec sync the resource from manager: Applications
2025-08-18T00:31:15.379Z INFO controller/controller.go:217 Starting workers {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription", "worker count": 1}
2025-08-18T00:31:15.379Z INFO controller/controller.go:217 Starting workers {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application", "worker count": 1}
2025-08-18T00:31:15.428Z INFO controller/controller.go:217 Starting workers {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule", "worker count": 1}
2025-08-18T00:31:15.451Z INFO controller/controller.go:217 Starting workers {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding", "worker count": 1}
2025-08-18T00:31:15.451Z INFO controller/controller.go:217 Starting workers {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel", "worker count": 1}
2025-08-18T00:31:15.451Z INFO controller/controller.go:217 Starting workers {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet", "worker count": 1}
2025-08-18T00:31:15.452Z INFO controller/controller.go:217 Starting workers {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding", "worker count": 1}
2025-08-18T00:31:15.472Z INFO controller/controller.go:217 Starting workers {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement", "worker count": 1}
2025-08-18T00:31:15.472Z INFO controller/controller.go:217 Starting workers {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "worker count": 1}
agent spec sync the resource from manager: Policies
agent spec sync the resource from manager: Subscriptions
agent spec sync the resource from manager: PlacementBindings
agent spec sync the resource from manager: Channels
agent spec sync the resource from manager: ManagedClustersLabels
agent spec sync the resource from manager: Placements
agent spec sync the resource from manager: ManagedClusterSetBindings
agent spec sync the resource from manager: ManagedClusterSets
•2025-08-18T00:31:16.325Z INFO placementrules-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-placementrule-1"}
••2025-08-18T00:31:16.334Z INFO managedclustersets-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "", "Request.Name": "test-managedclusterset-1"}
••2025-08-18T00:31:16.343Z INFO placements-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-placement-1"}
2025-08-18T00:31:16.343Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy"
2025-08-18T00:31:16.343Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy"
2025-08-18T00:31:16.343Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups"
•2025-08-18T00:31:16.352Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy"
2025-08-18T00:31:16.352Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy"
2025-08-18T00:31:16.352Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups"
•agent spec sync the resource from manager: PlacementRules
2025-08-18T00:31:16.363Z INFO policies-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-policy-1"}
•agent spec sync the resource from manager: Policies
agent spec sync the resource from manager: ManagedClusterSets
agent spec sync the resource from manager: Placements
agent spec sync the resource from manager: ManagedClustersLabels
•2025-08-18T00:31:17.379Z INFO policies-spec-syncer controllers/generic.go:89 Mismatch between hub and the database, updating the database {"Request.Namespace": "default", "Request.Name": "test-policy-1"}
2025-08-18T00:31:18.275Z INFO db-to-transport-syncer-policy syncers/generic_syncer.go:76 sync interval has been reset to 2s
agent spec sync the resource from manager: Policies
•2025-08-18T00:31:18.388Z INFO policies-spec-syncer controllers/generic.go:106 Removing an instance from the database {"Request.Namespace": "default", "Request.Name": "test-policy-1"}
2025-08-18T00:31:18.398Z INFO policies-spec-syncer controllers/generic.go:113 Removing finalizer {"Request.Namespace": "default", "Request.Name": "test-policy-1"}
2025-08-18T00:31:18.403Z INFO policies-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-policy-1"}
2025-08-18T00:31:18.408Z INFO controller/controller.go:314 Warning: Reconciler returned both a non-zero result and a non-nil error. The result will always be ignored if the error is non-nil and the non-nil error causes reqeueuing with exponential backoff. For more details, see: https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/reconcile#Reconciler {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "Policy": {"name":"test-policy-1","namespace":"default"}, "namespace": "default", "name": "test-policy-1", "reconcileID": "5c99642d-02cd-4fb3-825e-3768503c70cf"}
2025-08-18T00:31:18.408Z ERROR controller/controller.go:316 Reconciler error {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy", "Policy": {"name":"test-policy-1","namespace":"default"}, "namespace": "default", "name": "test-policy-1", "reconcileID": "5c99642d-02cd-4fb3-825e-3768503c70cf", "error": "failed to add finalzier: failed to add a finalizer: Operation cannot be fulfilled on policies.policy.open-cluster-management.io \"test-policy-1\": StorageError: invalid object, Code: 4, Key: /registry/policy.open-cluster-management.io/policies/default/test-policy-1, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: edf2a4f3-9747-4f1e-8f70-7d1a62ded986, UID in object meta: "}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
•spec.applications: default - test-application-1
2025-08-18T00:31:19.406Z INFO applications-spec-controller controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "app1"}
agent spec sync the resource from manager: Policies
agent spec sync the resource from manager: Applications
spec.applications: default - test-application-1
spec.applications: default - app1
•2025-08-18T00:31:20.427Z INFO managedclustersetbindings-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "test-managedclustersetbinding-1"}
••2025-08-18T00:31:20.482Z INFO channels-spec-controller controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "ch2"}
spec.channels: default - test-channel-1
2025-08-18T00:31:21.247Z INFO managed-cluster-labels-syncer syncers/managedcluster_labels_watcher.go:93 trimming interval has been reset to 4s
agent spec sync the resource from manager: ManagedClusterSetBindings
agent spec sync the resource from manager: Channels
spec.channels: default - test-channel-1
spec.channels: default - ch2
•2025-08-18T00:31:21.529Z INFO subscriptions-spec-syncer controllers/generic.go:128 Adding finalizer {"Request.Namespace": "default", "Request.Name": "sub2"}
spec.subscriptions: default - test-subscription-1
agent spec sync the resource from manager: Subscriptions
2025-08-18T00:31:22.325Z INFO db-to-transport-syncer-policy syncers/generic_syncer.go:76 sync interval has been reset to 1s
spec.subscriptions: default - test-subscription-1
spec.subscriptions: default - sub2
•2025-08-18T00:31:22.543Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
2025-08-18T00:31:22.543Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
2025-08-18T00:31:22.543Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:237 Shutdown signal received, 
waiting for all workers to finish {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription"} 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-application syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-placementrule syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-managedclustersetbinding syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-managedclusterlabel syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-channels syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-placementrulebiding syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-placements syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-managedclusterset syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-subscriptions syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO db-to-transport-syncer-policy syncers/generic_syncer.go:35 stopped syncer 2025-08-18T00:31:22.544Z INFO managed-cluster-labels-syncer syncers/managedcluster_labels_watcher.go:52 stopped watcherspecmanaged_clusters_labelsstatus tablemanaged_clusters 2025-08-18T00:31:22.544Z INFO spec/dispatcher.go:56 stopped dispatching bundles 2025-08-18T00:31:22.544Z INFO consumer/generic_consumer.go:179 receiver stopped 2025-08-18T00:31:22.544Z INFO controller/controller.go:239 All workers finished {"controller": "placementrule", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "PlacementRule"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:239 All workers finished {"controller": "application", "controllerGroup": "app.k8s.io", "controllerKind": "Application"} 
2025-08-18T00:31:22.544Z INFO controller/controller.go:239 All workers finished {"controller": "policy", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "Policy"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:239 All workers finished {"controller": "placement", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "Placement"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:239 All workers finished {"controller": "placementbinding", "controllerGroup": "policy.open-cluster-management.io", "controllerKind": "PlacementBinding"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:239 All workers finished {"controller": "channel", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Channel"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:239 All workers finished {"controller": "managedclustersetbinding", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSetBinding"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:239 All workers finished {"controller": "managedclusterset", "controllerGroup": "cluster.open-cluster-management.io", "controllerKind": "ManagedClusterSet"} 2025-08-18T00:31:22.544Z INFO controller/controller.go:239 All workers finished {"controller": "subscription", "controllerGroup": "apps.open-cluster-management.io", "controllerKind": "Subscription"} 2025-08-18T00:31:22.544Z INFO manager/internal.go:550 Stopping and waiting for caches I0818 00:31:22.544471 25536 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Subscription" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:22.544539 25536 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" 
type="*v1beta1.Application" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" I0818 00:31:22.544619 25536 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta1.Placement" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding" 2025-08-18T00:31:22.544Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:31:22.544Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:31:22.544Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager 2025-08-18 00:31:22.546 UTC [26171] LOG: received fast shutdown request 2025-08-18 00:31:22.552 UTC [26171] LOG: aborting any active transactions 2025-08-18 00:31:22.552 UTC [26194] FATAL: terminating connection due to administrator command 2025-08-18 00:31:22.553 UTC [26195] FATAL: terminating connection due to administrator command waiting for server to shut down....2025-08-18 00:31:22.556 UTC [26171] LOG: background worker "logical replication launcher" (PID 26189) exited with exit code 1 2025-08-18 00:31:22.558 UTC [26311] FATAL: terminating connection due to administrator command 2025-08-18 00:31:22.562 UTC [26179] LOG: shutting down 2025-08-18 00:31:22.563 UTC [26179] LOG: checkpoint starting: shutdown immediate 2025-08-18 00:31:22.607 UTC [26179] LOG: checkpoint complete: wrote 1049 buffers (6.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.034 s, sync=0.011 s, total=0.045 s; sync files=481, longest=0.008 s, average=0.001 s; distance=5327 kB, estimate=5327 kB; lsn=0/1A127B0, redo lsn=0/1A127B0 2025-08-18 00:31:22.623 UTC [26171] LOG: database system is shut down done server stopped Ran 16 of 16 Specs in 19.353 seconds SUCCESS! 
-- 16 Passed | 0 Failed | 0 Pending | 0 Skipped --- PASS: TestSpecSyncer (19.35s) PASS ok github.com/stolostron/multicluster-global-hub/test/integration/manager/spec 19.436s failed to get CustomResourceDefinition for subscriptionreports.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptionreports.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scopefailed to get CustomResourceDefinition for subscriptions.apps.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "subscriptions.apps.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scopefailed to get CustomResourceDefinition for policies.policy.open-cluster-management.io: customresourcedefinitions.apiextensions.k8s.io "policies.policy.open-cluster-management.io" is forbidden: User "system:serviceaccount:ci-op-7m89ydg2:default" cannot get resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope=== RUN TestDbsyncer Running Suite: Status dbsyncer Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/status ============================================================================================================================ Random Seed: 1755477073 Will run 32 of 32 specs The files belonging to this database system will be owned by user "1002610000". This user must also own the server process. The database cluster will be initialized with locale "C". The default database encoding has accordingly been set to "SQL_ASCII". The default text search configuration will be set to "english". Data page checksums are disabled. creating directory /tmp/tmp/embedded-postgres-go-57281/extracted/data ... 
ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    /tmp/tmp/embedded-postgres-go-57281/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-57281/extracted/data -l logfile start

waiting for server to start....2025-08-18 00:31:27.405 UTC [26404] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
2025-08-18 00:31:27.405 UTC [26404] LOG: listening on IPv6 address "::1", port 57281
2025-08-18 00:31:27.405 UTC [26404] LOG: listening on IPv4 address "127.0.0.1", port 57281
2025-08-18 00:31:27.406 UTC [26404] LOG: listening on Unix socket "/tmp/.s.PGSQL.57281"
2025-08-18 00:31:27.408 UTC [26407] LOG: database system was shut down at 2025-08-18 00:31:27 UTC
2025-08-18 00:31:27.410 UTC [26404] LOG: database system is ready to accept connections
done
server started
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
script 1.upgrade.sql executed successfully.
script 1.schemas.sql executed successfully.
script 2.tables.sql executed successfully.
script 3.functions.sql executed successfully.
script 4.trigger.sql executed successfully.
2025-08-18T00:31:27.728Z INFO consumer/generic_consumer.go:89 transport consumer with go chan receiver
2025-08-18T00:31:27.729Z INFO dispatcher/transport_dispatcher.go:42 transport dispatcher starts dispatching received events...
2025-08-18T00:31:27.729Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:27.729Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:27.729Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:27.729Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.completecompliance"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementrule.spec"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.security.alertcounts"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.heartbeat"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:48 registering hybrid element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:48 registering hybrid element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompletecompliance"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementdecision"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.subscription.report"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.info"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.managedcluster"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementrule.localspec"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placement.spec"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.subscription.status"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedclustermigration"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.localrootpolicy"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.localreplicatedpolicy"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.deltacompliance"}
2025-08-18T00:31:27.729Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.minicompliance"}
2025-08-18T00:31:27.729Z INFO hub1.complete.placementdecision conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
2025-08-18T00:31:27.730Z INFO statistics/statistics.go:98 starting statistics
2025-08-18T00:31:27.730Z INFO dispatcher/conflation_dispatcher.go:64 starting dispatcher
2025-08-18T00:31:27.730Z INFO workerpool/worker_pool.go:36 connection stats {"open connection(worker)": 1, "max": 10}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 10}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 1}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 2}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 3}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 4}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 5}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 6}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 7}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 8}
2025-08-18T00:31:27.730Z INFO workerpool/worker.go:44 started worker {"WorkerID": 9}
PlacementDecision: hub1 testPlacementDecision
•2025-08-18T00:31:37.829Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:37.829Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:37.829Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:37.829Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:37.829Z INFO hub1.complete.placementrule.localspec conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
hub1 f47ac10b-58cc-4372-a567-0e02b2c3d479 {"spec": {"schedulerName": "global-hub"}, "status": {}, "metadata": {"uid": "f47ac10b-58cc-4372-a567-0e02b2c3d479", "name": "test-placementrule-1", "namespace": "default", "creationTimestamp": null}}
•2025-08-18T00:31:37.932Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:37.932Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:37.932Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:37.932Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:37.932Z INFO hub1.complete.subscription.report conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
SubscriptionReport: hub1 testAppReport
•2025-08-18T00:31:38.034Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:38.034Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:38.034Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:38.034Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:38.034Z INFO hub1.complete.subscription.status conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
SubscriptionReport: hub1 testAppSbu
•2025-08-18T00:31:38.139Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:38.139Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:38.139Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:38.139Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:38.139Z INFO hub1.complete.policy.compliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
Compliance: ID(b8b3e164-377e-4be1-a870-992265f31f7c) hub1/cluster1 unknown
Compliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster1 compliant
Compliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster2 non_compliant
Compliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster4 pending
•2025-08-18T00:31:38.241Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:38.241Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:38.241Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:38.241Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:38.241Z INFO hub1.complete.policy.completecompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
Complete(Same): id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster1 compliant
Complete(Same): id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster2 non_compliant
Complete(Same): id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster4 compliant
•2025-08-18T00:31:43.243Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.243Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.243Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:43.243Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster1 compliant
Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster2 non_compliant
Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster4 compliant
Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster1 non_compliant
Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster2 compliant
Complete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1/cluster4 pending
•S2025-08-18T00:31:43.345Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.345Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.345Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:43.345Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:43.345Z INFO hub1.complete.policy.minicompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
MinimalCompliance: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea6) hub1
3 2
•2025-08-18T00:31:43.448Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.448Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.448Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:43.448Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:43.448Z INFO hub1.complete.managedhub.info conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
hub1 00000000-0000-0000-0000-000000000001 {"clusterId": "00000000-0000-0000-0000-000000000001", "consoleURL": "console-openshift-console.apps.test-cluster", "grafanaURL": "", "mchVersion": ""}
•2025-08-18T00:31:43.450Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.450Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.450Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:43.450Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:43.450Z INFO conflator/element_hybrid.go:52 resetting stream element version {"type": "policy.localspec", "version": "0.1"}
2025-08-18T00:31:43.452Z ERROR policy/local_policy_spec_handler.go:220 failed to get cluster info from db -
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).postPolicyToInventoryApi
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:220
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).handleEvent
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:145
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58
PolicySpec Creating hub1 2971a010-b7c6-45c0-9578-c6f90a7def91 {"spec": {"disabled": false, "policy-templates": null}, "status": {}, "metadata": {"uid": "2971a010-b7c6-45c0-9578-c6f90a7def91", "name": "testLocalPolicy1", "namespace": "default", "creationTimestamp": null}}
PolicySpec Creating hub1 e67c0920-8e64-408a-b53f-e5edc5a8687a {"spec": {"disabled": false, "policy-templates": null}, "status": {}, "metadata": {"uid": "e67c0920-8e64-408a-b53f-e5edc5a8687a", "name": "testLocalPolicy2", "namespace": "default", "creationTimestamp": null}}
•2025-08-18T00:31:43.553Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.553Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.553Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:43.553Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:43.558Z ERROR policy/local_policy_spec_handler.go:220 failed to get cluster info from db -
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).postPolicyToInventoryApi
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:220
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).handleEvent
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:145
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58
•2025-08-18T00:31:43.655Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.655Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.655Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:43.655Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:43.657Z ERROR policy/local_policy_spec_handler.go:220 failed to get cluster info from db -
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).postPolicyToInventoryApi
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:220
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).handleEvent
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:145
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58
•2025-08-18T00:31:43.758Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.758Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.758Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:43.758Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:43.760Z ERROR policy/local_policy_spec_handler.go:220 failed to get cluster info from db -
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).postPolicyToInventoryApi
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:220
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicySpecHandler).handleEvent
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_policy_spec_handler.go:145
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob.func1
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:88
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:86
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58
•2025-08-18T00:31:43.861Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.862Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.862Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:43.862Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.completecompliance"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementrule.spec"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.security.alertcounts"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.heartbeat"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:48 registering hybrid element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:48 registering hybrid element {"eventType":
"io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompletecompliance"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementdecision"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.subscription.report"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.info"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.managedcluster"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placementrule.localspec"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.compliance"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.placement.spec"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.subscription.status"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.managedclustermigration"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.localrootpolicy"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.event.localreplicatedpolicy"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:43 registering delta element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.deltacompliance"}
2025-08-18T00:31:43.862Z INFO conflator/conflation_unit.go:39 registering complete element {"eventType": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.minicompliance"}
2025-08-18T00:31:43.862Z INFO conflator/element_delta.go:50 resetting delta element version {"type": "event.managedcluster", "version": "0.1"}
>> cluster-event-cluster1 13b2e003-2bdf-4c82-9bdf-f1aa7ccf608d managed-cluster1.17cd5c3642c43a8a 2025-08-18 00:31:43.861524 +0000 +0000
•>> cluster-event-cluster1 13b2e003-2bdf-4c82-9bdf-f1aa7ccf607c managed-cluster1.17cd5c3642c43a8a 2025-08-18 00:31:43.861524 +0000 +0000
•2025-08-18T00:31:43.966Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.966Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:43.966Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:43.966Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:43.966Z INFO hub1.complete.placementrule.spec conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
PlacementRule: hub1 testPlacementRule
•2025-08-18T00:31:44.069Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.069Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.069Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:44.069Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:44.069Z INFO conflator/element_delta.go:50 resetting delta element version {"type": "event.localrootpolicy", "version": "0.1"}
hub1 policy-limitrange.17b8363660d39188 Policy local-policy-namespace/policy-limitrange was propagated to cluster kind-hub2-cluster1/kind-hub2-cluster1
•2025-08-18T00:31:44.172Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.172Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.172Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:44.172Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:44.172Z INFO conflator/element_hybrid.go:52 resetting stream element version {"type": "managedcluster", "version": "0.1"}
ManagedCluster Creating hub1 3f406177-34b2-4852-88dd-ff2809680331
ManagedCluster Creating hub1 3f406177-34b2-4852-88dd-ff2809680332
ManagedCluster Creating hub1 3f406177-34b2-4852-88dd-ff2809680333
•2025-08-18T00:31:44.274Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.274Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.274Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:44.274Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
ManagedCluster Resync hub1 3f406177-34b2-4852-88dd-ff2809680331
ManagedCluster Resync hub1 3f406177-34b2-4852-88dd-ff2809680332
ManagedCluster Resync hub1 3f406177-34b2-4852-88dd-ff2809680333
•2025-08-18T00:31:44.376Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.376Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.376Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:44.376Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
ManagedCluster Delete [{hub1 3f406177-34b2-4852-88dd-ff2809680332 {"spec": {"hubAcceptsClient": false}, "status": {"version": {}, "conditions": null, "clusterClaims": [{"name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680332"}]}, "metadata": {"uid": "3f406177-34b2-4852-88dd-ff2809680332", "name": "cluster2", "namespace": "cluster2", "creationTimestamp": null}} none 2025-08-18 00:31:44.172772 +0000 +0000 2025-08-18 00:31:44.172772 +0000 +0000 {0001-01-01 00:00:00 +0000 UTC false}} {hub1 3f406177-34b2-4852-88dd-ff2809680333 {"spec": {"hubAcceptsClient": false}, "status": {"version": {}, "conditions": null, "clusterClaims": [{"name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680333"}]}, "metadata": {"uid": "3f406177-34b2-4852-88dd-ff2809680333", "name": "cluster3", "namespace": "cluster3", "creationTimestamp": null}} none 2025-08-18 00:31:44.172772 +0000 +0000 2025-08-18 00:31:44.172772 +0000 +0000 {0001-01-01 00:00:00 +0000 UTC false}}]
ManagedCluster Delete [{hub1 3f406177-34b2-4852-88dd-ff2809680333 {"spec": {"hubAcceptsClient": false}, "status": {"version": {}, "conditions": null, "clusterClaims": [{"name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680333"}]}, "metadata": {"uid": "3f406177-34b2-4852-88dd-ff2809680333", "name": "cluster3", "namespace": "cluster3", "creationTimestamp": null}} none 2025-08-18 00:31:44.172772 +0000 +0000 2025-08-18 00:31:44.172772 +0000 +0000 {0001-01-01 00:00:00 +0000 UTC false}}]
•2025-08-18T00:31:44.478Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.478Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.478Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:44.478Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
ManagedCluster Delete [{hub1 3f406177-34b2-4852-88dd-ff2809680333 {"spec": {"hubAcceptsClient": false}, "status": {"version": {}, "conditions": null, "clusterClaims": [{"name": "id.k8s.io", "value": "3f406177-34b2-4852-88dd-ff2809680333"}]}, "metadata": {"uid": "3f406177-34b2-4852-88dd-ff2809680333", "name": "cluster3", "namespace": "cluster3", "creationTimestamp": null}} none 2025-08-18 00:31:44.172772 +0000 +0000 2025-08-18 00:31:44.172772 +0000 +0000 {0001-01-01 00:00:00 +0000 UTC false}}]
ManagedCluster Delete []
•2025-08-18T00:31:44.583Z INFO metadata/threshold_metadata.go:40 failed to parse topic from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.583Z INFO metadata/threshold_metadata.go:44 failed to parse partition from event {"error": "invalid CloudEvents value: "}
2025-08-18T00:31:44.583Z INFO metadata/threshold_metadata.go:49 failed to get offset string from event {"offset": ""}
2025-08-18T00:31:44.583Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from event {"offset": "", "error": "strconv.ParseInt: parsing \"\": invalid syntax"}
2025-08-18T00:31:44.583Z INFO hub1.complete.policy.localcompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"}
2025-08-18T00:31:44.583Z INFO policy.localcompliance policy/local_compliance_handler.go:61 handler start {"type": "io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance", "LH": "hub1", "version": "0.1"}
LocalCompliance: ID(b8b3e164-377e-4be1-a870-992265f31f7c) hub1/cluster1 unknown
LocalCompliance: expiredCount 1
LocalCompliance: addedCount 0
2025-08-18T00:31:44.586Z WARN policy.localcompliance policy/local_compliance_handler.go:224 failed to get cluster info from db -
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.syncInventory
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:224
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyComplianceHandler).handleCompliance
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:148
github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyComplianceHandler).handleEventWrapper
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:55
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle.func1
    /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:121
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
    /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:119 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:76 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 2025-08-18T00:31:44.587Z INFO policy.localcompliance policy/local_compliance_handler.go:183 handler finishedtypeio.open-cluster-management.operator.multiclusterglobalhubs.policy.localcomplianceLHhub1version0.1 LocalCompliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 compliant LocalCompliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 non_compliant LocalCompliance: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster4 pending LocalCompliance: expiredCount 0 LocalCompliance: addedCount 3 •2025-08-18T00:31:44.686Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:44.686Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:44.686Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:44.686Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:31:44.686Z INFO policy.localcompliance policy/local_compliance_handler.go:61 handler start type 
io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcomplianceLH hub1version 1.2 2025-08-18T00:31:44.690Z WARN policy.localcompliance policy/local_compliance_handler.go:224 failed to get cluster info from db - github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.syncInventory /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:224 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyComplianceHandler).handleCompliance /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:148 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyComplianceHandler).handleEventWrapper /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:55 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:121 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:119 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:76 
github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 2025-08-18T00:31:44.690Z INFO policy.localcompliance policy/local_compliance_handler.go:183 handler finishedtypeio.open-cluster-management.operator.multiclusterglobalhubs.policy.localcomplianceLHhub1version1.2 LocalCompliance Resync: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 compliant LocalCompliance Resync: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 non_compliant LocalCompliance Resync: ID(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster5 pending •2025-08-18T00:31:49.688Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:49.688Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:49.688Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:49.688Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:31:49.688Z INFO hub1.complete.policy.localcompletecompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} 2025-08-18T00:31:49.691Z WARN policy.localcompletecompliance policy/local_compliance_handler.go:224 failed to get cluster info from db - github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.syncInventory /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:224 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyCompleteHandler).handleCompleteCompliance 
/go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_complete_handler.go:169 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyCompleteHandler).handleEventWrapper /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_complete_handler.go:56 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:121 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:119 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:76 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 compliant LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 non_compliant LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster5 compliant •2025-08-18T00:31:54.689Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 
2025-08-18T00:31:54.689Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:54.689Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:54.689Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:31:54.689Z INFO hub1.complete.policy.localcompletecompliance conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 compliant LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 non_compliant LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster5 compliant 2025-08-18T00:31:54.694Z WARN policy.localcompletecompliance policy/local_compliance_handler.go:224 failed to get cluster info from db - github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.syncInventory /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_compliance_handler.go:224 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyCompleteHandler).handleCompleteCompliance /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_complete_handler.go:169 github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy.(*localPolicyCompleteHandler).handleEventWrapper /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/handlers/policy/local_complete_handler.go:56 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle.func1 /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:121 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 
/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/wait/poll.go:48 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).fullBundleHandle /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:119 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).handleJob /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:76 github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool.(*Worker).start /go/src/github.com/stolostron/multicluster-global-hub/manager/pkg/status/conflator/workerpool/worker.go:58 LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster1 non_compliant LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster5 pending LocalComplete: id(d9347b09-bb46-4e2b-91ea-513e83ab9ea8) hub1/cluster2 compliant •2025-08-18T00:31:54.793Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:54.793Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:54.793Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:54.793Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:31:54.793Z INFO hub1.complete.security.alertcounts conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} 2025/08/18 00:31:54 
/go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/status/security_alert_counts_handler_test.go:63 record not found [2.244ms] [rows:0] SELECT * FROM "security"."alert_counts" ORDER BY "alert_counts"."hub_name" LIMIT 1 •2025-08-18T00:31:54.902Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:54.902Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:54.902Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:54.902Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025/08/18 00:31:54 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/status/security_alert_counts_handler_test.go:100 record not found [1.084ms] [rows:0] SELECT * FROM "security"."alert_counts" WHERE "alert_counts"."hub_name" = 'hub1' AND "alert_counts"."source" = 'rhacs-operator/stackrox-central-services' ORDER BY "alert_counts"."hub_name" LIMIT 1 2025-08-18T00:31:55.006Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.006Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.006Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:55.006Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025/08/18 00:31:55 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/status/security_alert_counts_handler_test.go:131 record not found [0.534ms] [rows:0] SELECT * FROM "security"."alert_counts" WHERE "alert_counts"."hub_name" = 'hub1' AND "alert_counts"."source" = 
'other-namespace/other-name' ORDER BY "alert_counts"."hub_name" LIMIT 1 •2025-08-18T00:31:55.120Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.120Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.120Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:55.120Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax •2025-08-18T00:31:55.232Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.233Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.233Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:55.233Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:31:55.233Z INFO conflator/element_delta.go:50 resetting delta element version {"type": "event.localreplicatedpolicy", "version": "0.1"} LocalPolicyEvent: local-policy-namespace.policy-limitrange.17b0db242743213210 f302ce61-98e7-4d63-8dd2-65951e32fd95 non_compliant •2025-08-18T00:31:55.337Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.337Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.337Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:55.337Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:31:55.337Z INFO 
hub1.complete.placement.spec conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} Placement: hub1 testPlacements •2025-08-18T00:31:55.439Z INFO metadata/threshold_metadata.go:40 failed to parse topic from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.439Z INFO metadata/threshold_metadata.go:44 failed to parse partition from eventerrorinvalid CloudEvents value: 2025-08-18T00:31:55.439Z INFO metadata/threshold_metadata.go:49 failed to get offset string from eventoffset 2025-08-18T00:31:55.439Z INFO metadata/threshold_metadata.go:54 failed to parse offset into int64 from eventoffseterrorstrconv.ParseInt: parsing "": invalid syntax 2025-08-18T00:31:55.439Z INFO hub1.complete.managedhub.heartbeat conflator/element_complete.go:68 resetting complete element processed version {"version": "0.1"} hub1 2025-08-18 00:31:55.439416 +0000 +0000 active •2025-08-18T00:31:55.541Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:31:55.541Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:31:55.541Z INFO conflator/conflation_committer.go:55 context canceled, exiting committer... 
2025-08-18T00:31:55.541Z INFO dispatcher/conflation_dispatcher.go:69 stopped dispatcher
2025-08-18T00:31:55.541Z INFO statistics/statistics.go:108 stopped statistics
2025-08-18T00:31:55.541Z INFO dispatcher/transport_dispatcher.go:47 stopped dispatching events
2025-08-18T00:31:55.541Z INFO consumer/generic_consumer.go:179 receiver stopped
2025-08-18T00:31:55.541Z INFO manager/internal.go:550 Stopping and waiting for caches
2025-08-18T00:31:55.541Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:31:55.541Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:31:55.541Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager
waiting for server to shut down...2025-08-18 00:31:55.542 UTC [26404] LOG: received fast shutdown request
.2025-08-18 00:31:55.542 UTC [26404] LOG: aborting any active transactions
2025-08-18 00:31:55.542 UTC [26455] FATAL: terminating connection due to administrator command
2025-08-18 00:31:55.542 UTC [26454] FATAL: terminating connection due to administrator command
2025-08-18 00:31:55.542 UTC [26452] FATAL: terminating connection due to administrator command
2025-08-18 00:31:55.542 UTC [26451] FATAL: terminating connection due to administrator command
2025-08-18 00:31:55.542 UTC [26450] FATAL: terminating connection due to administrator command
2025-08-18 00:31:55.542 UTC [26413] FATAL: terminating connection due to administrator command
2025-08-18 00:31:55.543 UTC [26414] FATAL: terminating connection due to administrator command
2025-08-18 00:31:55.544 UTC [26404] LOG: background worker "logical replication launcher" (PID 26411) exited with exit code 1
2025-08-18 00:31:55.544 UTC [26453] FATAL: terminating connection due to administrator command
2025-08-18 00:31:55.546 UTC [26405] LOG: shutting down
2025-08-18 00:31:55.546 UTC [26405] LOG: checkpoint starting: shutdown immediate
2025-08-18 00:31:55.564 UTC [26405] LOG: checkpoint complete: wrote 1094 buffers (6.7%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.015 s, sync=0.003 s, total=0.018 s; sync files=493, longest=0.001 s, average=0.001 s; distance=5344 kB, estimate=5344 kB; lsn=0/1A168F0, redo lsn=0/1A168F0
2025-08-18 00:31:55.574 UTC [26404] LOG: database system is shut down
done
server stopped
Ran 31 of 32 Specs in 43.322 seconds
SUCCESS! -- 31 Passed | 0 Failed | 0 Pending | 1 Skipped
--- PASS: TestDbsyncer (43.32s)
PASS
ok  github.com/stolostron/multicluster-global-hub/test/integration/manager/status 43.437s
=== RUN   TestControllers
Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/manager/webhook
====================================================================================================================================
Random Seed: 1755477073
Will run 4 of 4 specs
2025-08-18T00:31:22.230Z INFO controller-runtime.webhook webhook/server.go:183 Registering webhook {"path": "/mutating"}
2025-08-18T00:31:22.230Z INFO controller-runtime.webhook webhook/server.go:191 Starting webhook server
2025-08-18T00:31:22.231Z INFO controller-runtime.certwatcher certwatcher/certwatcher.go:161 Updated current TLS certificate
2025-08-18T00:31:22.231Z INFO controller-runtime.webhook webhook/server.go:242 Serving webhook server {"host": "127.0.0.1", "port": 41379}
2025-08-18T00:31:22.231Z INFO controller-runtime.certwatcher certwatcher/certwatcher.go:115 Starting certificate watcher
2025-08-18T00:31:22.270Z INFO webhook/admission_handler.go:34 admission webhook is called, name:, namespace:default, kind:Placement, operation:CREATE
2025-08-18T00:31:22.278Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.decisionStrategy"
2025-08-18T00:31:22.278Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "spec.spreadPolicy"
2025-08-18T00:31:22.278Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.decisionGroups"
•
2025-08-18T00:31:22.282Z INFO
webhook/admission_handler.go:34 admission webhook is called, name:, namespace:default, kind:Placement, operation:CREATE
•
2025-08-18T00:31:22.295Z INFO webhook/admission_handler.go:34 admission webhook is called, name:, namespace:default, kind:PlacementRule, operation:CREATE
•
2025-08-18T00:31:22.315Z INFO webhook/admission_handler.go:34 admission webhook is called, name:, namespace:default, kind:PlacementRule, operation:CREATE
•
2025-08-18T00:31:22.327Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
2025-08-18T00:31:22.327Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
2025-08-18T00:31:22.327Z INFO manager/internal.go:550 Stopping and waiting for caches
2025-08-18T00:31:22.327Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:31:22.327Z INFO controller-runtime.webhook webhook/server.go:249 Shutting down webhook server with timeout of 1 minute
2025-08-18T00:31:22.331Z ERROR controller-runtime.certwatcher certwatcher/certwatcher.go:185 error re-watching file {"error": "no such file or directory"}
sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).handleEvent
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/certwatcher/certwatcher.go:185
sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).Watch
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/certwatcher/certwatcher.go:133
2025-08-18T00:31:22.331Z ERROR controller-runtime.certwatcher certwatcher/certwatcher.go:190 error re-reading certificate {"error": "open /tmp/envtest-serving-certs-702919251/tls.crt: no such file or directory"}
sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).handleEvent
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/certwatcher/certwatcher.go:190
sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).Watch
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/certwatcher/certwatcher.go:133
2025-08-18T00:31:23.397Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:31:23.397Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager
Ran 4 of 4 Specs in 10.152 seconds
SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestControllers (10.15s)
PASS
ok  github.com/stolostron/multicluster-global-hub/test/integration/manager/webhook 10.233s
=== RUN   TestControllers
Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator
=============================================================================================================================
Random Seed: 1755477073
Will run 2 of 2 specs
The files belonging to this database system will be owned by user "1002610000". This user must also own the server process.
The database cluster will be initialized with locale "C". The default database encoding has accordingly been set to "SQL_ASCII". The default text search configuration will be set to "english".
Data page checksums are disabled.
creating directory /tmp/tmp/embedded-postgres-go-23283/extracted/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success.
You can now start the database server using:
    /tmp/tmp/embedded-postgres-go-23283/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-23283/extracted/data -l logfile start
waiting for server to start....2025-08-18 00:31:27.131 UTC [26391] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
2025-08-18 00:31:27.131 UTC [26391] LOG: listening on IPv6 address "::1", port 23283
2025-08-18 00:31:27.131 UTC [26391] LOG: listening on IPv4 address "127.0.0.1", port 23283
2025-08-18 00:31:27.131 UTC [26391] LOG: listening on Unix socket "/tmp/.s.PGSQL.23283"
2025-08-18 00:31:27.134 UTC [26394] LOG: database system was shut down at 2025-08-18 00:31:27 UTC
2025-08-18 00:31:27.136 UTC [26391] LOG: database system is ready to accept connections
done
server started
I0818 00:31:27.310401 25845 leaderelection.go:257] attempting to acquire leader lease default/549a8919.open-cluster-management.io...
I0818 00:31:27.320495 25845 leaderelection.go:271] successfully acquired lease default/549a8919.open-cluster-management.io
2025-08-18T00:31:27.320Z INFO controller/controller.go:175 Starting EventSource {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"}
2025-08-18T00:31:27.320Z INFO controller/controller.go:175 Starting EventSource {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.MulticlusterGlobalHubAgent"}
2025-08-18T00:31:27.320Z INFO controller/controller.go:183 Starting Controller {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.427Z INFO controller/controller.go:217 Starting workers {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1}
2025-08-18T00:31:27.535Z INFO manager/manager_reconciler.go:100 start manager controller
2025-08-18T00:31:27.535Z INFO storage/storage_reconciler.go:101 start storage controller
2025-08-18T00:31:27.535Z INFO managedhub/managedhub_controller.go:64 start managedhub controller
2025-08-18T00:31:27.535Z INFO controller/controller.go:183 Starting Controller {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:217 Starting workers {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ConfigMap"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.StatefulSet"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.PrometheusRule"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceMonitor"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:183 Starting Controller {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:132 Starting EventSource {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:132 Starting EventSource {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ManagedCluster"}
2025-08-18T00:31:27.535Z INFO managedhub/managedhub_controller.go:72 inited managedhub controller
2025-08-18T00:31:27.535Z INFO addon/default_agent_controller.go:71 start default agent controller
2025-08-18T00:31:27.535Z INFO addon/addon_manager.go:66 start addon manager controller
2025-08-18T00:31:27.535Z INFO mceaddons/mce_addons_controller.go:60 start mce addons controller
2025-08-18T00:31:27.535Z INFO webhook/webhook_controller.go:63 start webhook controller
2025-08-18T00:31:27.535Z INFO webhook/webhook_controller.go:73 inited webhook controller
2025-08-18T00:31:27.535Z INFO transporter/transport_reconciler.go:57 start transport controller
2025-08-18T00:31:27.535Z INFO controller/controller.go:183 Starting Controller {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:217 Starting workers {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.AddOnDeploymentConfig"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta2.ManagedClusterSetBinding"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.MutatingWebhookConfiguration"}
2025-08-18T00:31:27.535Z INFO controller/controller.go:175 Starting EventSource {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta1.Placement"}
2025-08-18T00:31:27.536Z INFO controller/controller.go:183 Starting Controller {"controller":
"webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:132 Starting EventSource {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:132 Starting EventSource {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:31:27.536Z INFO transporter/transport_reconciler.go:65 inited transport controller 2025-08-18T00:31:27.536Z INFO backup/backup_start.go:78 start backup controller 2025-08-18T00:31:27.536Z INFO backup/backup_start.go:90 inited backup controller 2025-08-18T00:31:27.536Z INFO controller/controller.go:183 Starting Controller {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:217 Starting workers {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "worker count": 1} 2025-08-18T00:31:27.536Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": 
"MulticlusterGlobalHub", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.PersistentVolumeClaim"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:175 Starting EventSource {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.MultiClusterHub"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:183 Starting Controller {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:132 Starting EventSource {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:132 Starting EventSource {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.Secret"} 2025-08-18T00:31:27.536Z INFO storage/postgres_user_reconciler.go:59 start postgres users controller 2025-08-18T00:31:27.536Z INFO agent/local_agent_controller.go:48 start local agent controller 2025-08-18T00:31:27.536Z INFO acm/resources.go:96 start acm controller 2025-08-18T00:31:27.536Z INFO acm/resources.go:122 inited acm controller 2025-08-18T00:31:27.536Z INFO controller/controller.go:175 Starting EventSource {"controller": "acm-controller", "source": "kind source: *v1.PartialObjectMetadata"} 2025-08-18T00:31:27.536Z INFO controller/controller.go:183 Starting Controller {"controller": "acm-controller"} 2025-08-18T00:31:27.548Z INFO agent/local_agent_controller.go:48 start local agent controller 2025-08-18T00:31:27.548Z INFO 
manager/manager_reconciler.go:100 start manager controller 2025-08-18T00:31:27.548Z INFO addon/default_agent_controller.go:71 start default agent controller 2025-08-18T00:31:27.548Z INFO addon/addon_manager.go:66 start addon manager controller 2025-08-18T00:31:27.548Z INFO mceaddons/mce_addons_controller.go:60 start mce addons controller 2025-08-18T00:31:27.649Z INFO controller/controller.go:217 Starting workers {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:31:27.649Z INFO storage/postgres_statefulset.go:65 the postgres customized config: •2025-08-18T00:31:27.661Z INFO KubeAPIWarningLogger log/warning_handler.go:65 metadata.finalizers: "fz": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers 2025-08-18T00:31:27.668Z INFO addon/default_agent_controller.go:71 start default agent controller 2025-08-18T00:31:27.668Z INFO addon/addon_manager.go:66 start addon manager controller 2025-08-18T00:31:27.668Z INFO mceaddons/mce_addons_controller.go:60 start mce addons controller 2025-08-18T00:31:27.668Z INFO agent/local_agent_controller.go:48 start local agent controller 2025-08-18T00:31:27.668Z INFO manager/manager_reconciler.go:100 start manager controller 2025-08-18T00:31:27.668Z INFO addon/default_agent_controller.go:71 start default agent controller 2025-08-18T00:31:27.668Z INFO addon/addon_manager.go:66 start addon manager controller 2025-08-18T00:31:27.668Z INFO mceaddons/mce_addons_controller.go:60 start mce addons controller 2025-08-18T00:31:27.668Z INFO agent/local_agent_controller.go:48 start local agent controller 2025-08-18T00:31:27.668Z INFO manager/manager_reconciler.go:100 start manager controller 2025-08-18T00:31:27.672Z INFO controller/controller.go:217 Starting workers {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": 
"MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:31:27.674Z INFO controller/controller.go:217 Starting workers {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:31:27.677Z INFO webhook/webhook_controller.go:78 webhookController resource removed: true 2025-08-18T00:31:27.677Z INFO transporter/transport_reconciler.go:49 TransportController resource removed: true 2025-08-18T00:31:27.677Z INFO managedhub/managedhub_controller.go:53 managedHubController resource removed: false 2025-08-18T00:31:27.678Z INFO webhook/webhook_controller.go:78 webhookController resource removed: true 2025-08-18T00:31:27.678Z INFO transporter/transport_reconciler.go:49 TransportController resource removed: true 2025-08-18T00:31:27.678Z INFO managedhub/managedhub_controller.go:53 managedHubController resource removed: false 2025-08-18T00:31:27.678Z INFO controller/controller.go:217 Starting workers {"controller": "acm-controller", "worker count": 1} 2025-08-18T00:31:27.695Z INFO protocol/strimzi_kafka_controller.go:58 KafkaController resource removed: false 2025-08-18T00:31:27.695Z INFO transporter/transport_reconciler.go:136 Wait kafka resource removed 2025-08-18T00:31:27.695Z INFO protocol/strimzi_kafka_controller.go:58 KafkaController resource removed: false 2025-08-18T00:31:27.695Z INFO transporter/transport_reconciler.go:136 Wait kafka resource removed 2025-08-18T00:31:27.702Z INFO managedhub/managedhub_controller.go:53 managedHubController resource removed: false 2025-08-18T00:31:27.702Z INFO managedhub/managedhub_controller.go:53 managedHubController resource removed: false 2025-08-18T00:31:27.702Z INFO protocol/strimzi_kafka_controller.go:58 KafkaController resource removed: false 2025-08-18T00:31:27.702Z INFO transporter/transport_reconciler.go:136 Wait kafka resource removed 2025-08-18T00:31:27.705Z ERROR controller/controller.go:316 Reconciler error 
{"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-6xr9cg"}, "namespace": "namespace-6xr9cg", "name": "test-mgh", "reconcileID": "e3386b20-7b6b-4090-ac7d-0317c13ada25", "error": "MulticlusterGlobalHub.operator.open-cluster-management.io \"test-mgh\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:27.778Z INFO storage/storage_reconciler.go:317 wait database ready, failed to connect database: failed to connect to database: failed to connect to `user=postgres database=hoh`: hostname resolving error: lookup multicluster-global-hub-postgresql.multicluster-global-hub.svc on 172.30.0.10:53: no such host
2025-08-18T00:31:27.778Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "test-mgh" not found
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:230
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
•
2025-08-18T00:31:27.812Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
2025-08-18T00:31:27.812Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
2025-08-18T00:31:27.812Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "acm-controller"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:239 All workers finished {"controller": "acm-controller"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:239 All workers finished {"controller": "multiclusterglobalhub", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:239 All workers finished {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:239 All workers finished {"controller": "webhook-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:239 All workers finished {"controller": "transport", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:239 All workers finished {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:239 All workers finished {"controller": "ManagedHubController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO controller/controller.go:239 All workers finished {"controller": "MetaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:27.812Z INFO manager/internal.go:550 Stopping and waiting for caches
I0818 00:31:27.812634 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.ManagedClusterAddOn" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.812739 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.Subscription" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.812807 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ServiceMonitor" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.812817 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ServiceAccount" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.812891 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.MultiClusterHub" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.812920 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.812955 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta2.ManagedClusterSetBinding" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.812981 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.MulticlusterGlobalHubAgent" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.813018 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.AddOnDeploymentConfig" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.813045 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha4.MulticlusterGlobalHub" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.813081 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManagedCluster" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.813161 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.PartialObjectMetadata" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.813223 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta1.Placement" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.813295 25845 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.MutatingWebhookConfiguration" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
2025-08-18T00:31:27.813Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:31:27.813Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:31:27.813Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager
2025-08-18T00:31:27.813Z ERROR manager/internal.go:512 error received after stop sequence was engaged {"error": "leader election lost"}
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/manager/internal.go:512
waiting for server to shut down...
2025-08-18 00:31:27.813 UTC [26391] LOG: received fast shutdown request
2025-08-18 00:31:27.813 UTC [26391] LOG: aborting any active transactions
2025-08-18 00:31:27.813 UTC [26401] FATAL: terminating connection due to administrator command
2025-08-18 00:31:27.814 UTC [26391] LOG: background worker "logical replication launcher" (PID 26397) exited with exit code 1
2025-08-18 00:31:27.815 UTC [26392] LOG: shutting down
2025-08-18 00:31:27.815 UTC [26392] LOG: checkpoint starting: shutdown immediate
.
2025-08-18 00:31:27.829 UTC [26392] LOG: checkpoint complete: wrote 919 buffers (5.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.014 s, sync=0.001 s, total=0.015 s; sync files=301, longest=0.001 s, average=0.001 s; distance=4231 kB, estimate=4231 kB; lsn=0/1900648, redo lsn=0/1900648
2025-08-18 00:31:27.839 UTC [26391] LOG: database system is shut down
done
server stopped
Ran 2 of 2 Specs in 15.702 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestControllers (15.70s)
PASS
ok github.com/stolostron/multicluster-global-hub/test/integration/operator 15.806s
=== RUN TestControllers
Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers
=========================================================================================================================================
Random Seed: 1755477073
Will run 15 of 15 specs
The files belonging to this database system will be owned by user "1002610000".
This user must also own the server process.
The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".
Data page checksums are disabled.
creating directory /tmp/tmp/embedded-postgres-go-51383/extracted/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ...
ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
/tmp/tmp/embedded-postgres-go-51383/extracted/bin/pg_ctl -D /tmp/tmp/embedded-postgres-go-51383/extracted/data -l logfile start
waiting for server to start....
2025-08-18 00:31:27.899 UTC [26418] LOG: starting PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
2025-08-18 00:31:27.900 UTC [26418] LOG: listening on IPv6 address "::1", port 51383
2025-08-18 00:31:27.900 UTC [26418] LOG: listening on IPv4 address "127.0.0.1", port 51383
2025-08-18 00:31:27.900 UTC [26418] LOG: listening on Unix socket "/tmp/.s.PGSQL.51383"
2025-08-18 00:31:27.902 UTC [26421] LOG: database system was shut down at 2025-08-18 00:31:27 UTC
2025-08-18 00:31:27.905 UTC [26418] LOG: database system is ready to accept connections
done
server started
I0818 00:31:28.068068 25846 leaderelection.go:257] attempting to acquire leader lease default/549a8919.open-cluster-management.io...
I0818 00:31:28.072890 25846 leaderelection.go:271] successfully acquired lease default/549a8919.open-cluster-management.io
2025-08-18T00:31:28.087Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"}
2025-08-18T00:31:28.087Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"}
2025-08-18T00:31:28.087Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ConfigMap"}
2025-08-18T00:31:28.087Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.StatefulSet"}
2025-08-18T00:31:28.087Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"}
2025-08-18T00:31:28.087Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.PrometheusRule"}
2025-08-18T00:31:28.087Z INFO controller/controller.go:175 Starting EventSource {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceMonitor"}
2025-08-18T00:31:28.087Z INFO controller/controller.go:183 Starting Controller {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:28.201Z INFO controller/controller.go:217 Starting workers {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1}
2025-08-18T00:31:28.462Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:240
github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers.init.func7.1
    /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/storage_test.go:78
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3
    /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/node.go:475
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3
    /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/suite.go:894
2025-08-18T00:31:28.465Z INFO controller/controller.go:175 Starting EventSource {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.ConfigMap"}
2025-08-18T00:31:28.465Z INFO controller/controller.go:175 Starting EventSource {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "source": "kind source: *v1.Secret"}
2025-08-18T00:31:28.465Z INFO controller/controller.go:183 Starting Controller {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"}
2025-08-18T00:31:28.465Z INFO controller/controller.go:217 Starting workers {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap", "worker count": 1}
2025-08-18T00:31:28.480Z INFO storage/postgres_user_reconciler.go:358 create postgres user: test-user1
2025-08-18T00:31:28.492Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:240
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:28.492Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:240
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:28.492Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile
    /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:240
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:28.531Z INFO storage/postgres_user_reconciler.go:322 database test1 created.
2025-08-18T00:31:28.542Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test-user1 on database test1.
2025-08-18T00:31:28.599Z INFO storage/postgres_user_reconciler.go:322 database test-2 created.
2025-08-18T00:31:28.612Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test-user1 on database test-2.
2025-08-18T00:31:28.616Z INFO storage/postgres_user_reconciler.go:242 create the postgresql user secret: postgresql-user-test-user1
2025-08-18T00:31:28.616Z INFO storage/postgres_user_reconciler.go:149 applied the postgresql users successfully!
{ "metadata": { "name": "postgresql-user-test-user1", "namespace": "namespace-q4h6hv", "uid": "970f6ad9-5b0c-4004-a8f9-eed31d809f3e", "resourceVersion": "371", "creationTimestamp": "2025-08-18T00:31:28Z", "labels": { "global-hub.open-cluster-management.io/managed-by": "multicluster-global-hub-custom-postgresql-users" }, "ownerReferences": [ { "apiVersion": "operator.open-cluster-management.io/v1alpha4", "kind": "MulticlusterGlobalHub", "name": "test-mgh", "uid": "fee3c3ed-8e28-427b-b21a-b99eea8d4c79", "controller": true, "blockOwnerDeletion": true } ], "managedFields": [ { "manager": "controllers.test", "operation": "Update", "apiVersion": "v1", "time": "2025-08-18T00:31:28Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:data": { ".": {}, "f:db.ca_cert": {}, "f:db.host": {}, "f:db.names": {}, "f:db.password": {}, "f:db.port": {}, "f:db.user": {} }, "f:metadata": { "f:labels": { ".": {}, "f:global-hub.open-cluster-management.io/managed-by": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"fee3c3ed-8e28-427b-b21a-b99eea8d4c79\"}": {} } }, "f:type": {} } } ] }, "data": { "db.ca_cert": "", "db.host": "bG9jYWxob3N0", "db.names": "WyJ0ZXN0MSIsICJ0ZXN0LTIiXQ==", "db.password": "MTY3Y2Q4NjViZmFh", "db.port": "NTEzODM=", "db.user": "dGVzdC11c2VyMQ==" }, "type": "Opaque" } 2025-08-18T00:31:28.705Z INFO storage/postgres_user_reconciler.go:351 postgres user 'test-user1' already exists 2025-08-18T00:31:28.706Z INFO storage/postgres_user_reconciler.go:324 database test1 already exists. 2025-08-18T00:31:28.714Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test-user1 on database test1. 2025-08-18T00:31:28.715Z INFO storage/postgres_user_reconciler.go:324 database test-2 already exists. 2025-08-18T00:31:28.723Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test-user1 on database test-2. 
2025-08-18T00:31:28.723Z INFO storage/postgres_user_reconciler.go:252 the postgresql user secret already exists: postgresql-user-test-user1 2025-08-18T00:31:28.727Z INFO storage/postgres_user_reconciler.go:358 create postgres user: test_user2 2025-08-18T00:31:28.780Z INFO storage/postgres_user_reconciler.go:322 database test3 created. 2025-08-18T00:31:28.794Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test_user2 on database test3. 2025-08-18T00:31:28.841Z INFO storage/postgres_user_reconciler.go:322 database test_4 created. 2025-08-18T00:31:28.856Z INFO storage/postgres_user_reconciler.go:305 granted all privileges to user test_user2 on database test_4. 2025-08-18T00:31:28.860Z INFO storage/postgres_user_reconciler.go:242 create the postgresql user secret: postgresql-user-test-user2 2025-08-18T00:31:28.860Z INFO storage/postgres_user_reconciler.go:149 applied the postgresql users successfully! { "metadata": { "name": "postgresql-user-test-user2", "namespace": "namespace-q4h6hv", "uid": "56a21e8d-a31a-428e-9b3d-3b263d010e04", "resourceVersion": "373", "creationTimestamp": "2025-08-18T00:31:28Z", "labels": { "global-hub.open-cluster-management.io/managed-by": "multicluster-global-hub-custom-postgresql-users" }, "ownerReferences": [ { "apiVersion": "operator.open-cluster-management.io/v1alpha4", "kind": "MulticlusterGlobalHub", "name": "test-mgh", "uid": "fee3c3ed-8e28-427b-b21a-b99eea8d4c79", "controller": true, "blockOwnerDeletion": true } ], "managedFields": [ { "manager": "controllers.test", "operation": "Update", "apiVersion": "v1", "time": "2025-08-18T00:31:28Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:data": { ".": {}, "f:db.ca_cert": {}, "f:db.host": {}, "f:db.names": {}, "f:db.password": {}, "f:db.port": {}, "f:db.user": {} }, "f:metadata": { "f:labels": { ".": {}, "f:global-hub.open-cluster-management.io/managed-by": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"fee3c3ed-8e28-427b-b21a-b99eea8d4c79\"}": {} } }, 
"f:type": {} } } ] }, "data": { "db.ca_cert": "", "db.host": "bG9jYWxob3N0", "db.names": "WyJ0ZXN0MyIsICJ0ZXN0XzQiXQ==", "db.password": "MzMwNzhjZDc5Mjcz", "db.port": "NTEzODM=", "db.user": "dGVzdF91c2VyMg==" }, "type": "Opaque" } 2025-08-18T00:31:28.907Z INFO storage/postgres_statefulset.go:65 the postgres customized config: •2025-08-18T00:31:28.937Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:220 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:28.937Z ERROR controller/controller.go:316 Reconciler error {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"multicluster-global-hub-storage","namespace":"namespace-q4h6hv"}, 
"namespace": "namespace-q4h6hv", "name": "multicluster-global-hub-storage", "reconcileID": "8c496bdc-ba91-43dd-89f8-6cce1d875c09", "error": "storage not ready, Error: failed to create/update postgres objects: services \"multicluster-global-hub-postgresql\" is forbidden: unable to create new content in namespace namespace-q4h6hv because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:29.107Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:220 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:29.107Z ERROR controller/controller.go:316 Reconciler error {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-x5k4qv"}, "namespace": "namespace-x5k4qv", "name": "test-mgh", "reconcileID": "1a9727a4-6682-4198-81a9-9cb98682749c", "error": "storage not ready, Error: subscriptions.operators.coreos.com \"crunchy-postgres-operator\" already exists"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:29.215Z INFO storage/postgres_crunchy.go:98 waiting the postgres connection credential to be ready...messagepostgres guest user secret postgres-pguser-guest is nil 2025-08-18T00:31:29.219Z INFO storage/postgres_crunchy.go:91 waiting the postgres cluster to be ready...messagepostgresclusters.postgres-operator.crunchydata.com "postgres" already exists •2025-08-18T00:31:29.305Z INFO storage/postgres_statefulset.go:65 the postgres customized config: wal_level = logical max_wal_size = 2GB "ssl = on\nssl_cert_file = '/opt/app-root/src/certs/tls.crt' # server certificate\nssl_key_file = '/opt/app-root/src/certs/tls.key' # server private 
key\nssl_min_protocol_version = TLSv1.3\nwal_level = logical\nmax_wal_size = 2GB\n" •2025-08-18T00:31:29.434Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:31:29.434Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Deployment"} 2025-08-18T00:31:29.434Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:31:29.434Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Service"} 2025-08-18T00:31:29.434Z INFO controller/controller.go:175 Starting EventSource {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:31:29.434Z INFO controller/controller.go:183 Starting Controller {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:29.535Z INFO controller/controller.go:217 Starting workers {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:31:30.871Z INFO config/transport_config.go:233 set the inventory clientCA - key: inventory-api-client-ca-certs 2025-08-18T00:31:30.871Z INFO config/transport_config.go:237 set the inventory clientCA - 
cert: inventory-api-client-ca-certs 2025-08-18T00:31:30.912Z ERROR inventory/inventory_reconciler.go:152 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:152 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:276 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:30.913Z ERROR controller/controller.go:316 Reconciler error {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-bh2rcl"}, "namespace": "namespace-bh2rcl", "name": "test-mgh", "reconcileID": "d3a8def6-54b5-4b09-a304-f34bef7028da", "error": "failed to create/update inventory objects: serviceaccounts \"inventory-api\" is forbidden: unable to create new content in namespace namespace-bh2rcl because it is being terminated"} 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:31:30.936Z INFO manager/manager_reconciler.go:100 start manager controller 2025-08-18T00:31:30.936Z INFO manager/manager_reconciler.go:128 inited manager controller •2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Deployment"} 2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Service"} 2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ClusterRole"} 
2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ClusterRoleBinding"} 2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Role"} 2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.RoleBinding"} 2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Route"} 2025-08-18T00:31:30.936Z INFO controller/controller.go:175 Starting EventSource {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.ClusterManagementAddOn"} 2025-08-18T00:31:30.936Z INFO controller/controller.go:183 Starting Controller {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:31.045Z INFO controller/controller.go:217 Starting workers {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:31:31.045Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 
/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:31.045Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-nnfsdx"}, "namespace": "namespace-nnfsdx", "name": "test-mgh", "reconcileID": "d5f3a46c-3e78-409f-9e8a-bd495127ebde", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 999 
[running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x37268c8, 0xc001cfb620}, {0x2b54860, 0x533b0f0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x2b54860?, 0x533b0f0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod({0x37268c8, 0xc001cfb620}, {0x0, 0x0}, {0xc000ffee10, 0x10}, {0x318bd30?, 0x3?})\n\t/go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 +0xbd\ngithub.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile(0xc0017e3f20, {0x37268c8, 0xc001cfb620}, {{{0x0?, 0x312b712?}, {0x5?, 0x100?}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 +0x10aa\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001cfb590?, {0x37268c8?, 0xc001cfb620?}, {{{0xc000ffee10?, 0x0?}, {0xc000ffee08?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x3747160, {0x3726900, 0xc0008891d0}, {{{0xc000ffee10, 0x10}, {0xc000ffee08, 0x8}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x3747160, {0x3726900, 0xc0008891d0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 
+0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 873\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 
2025-08-18T00:31:31.045Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-nnfsdx"}, "namespace": "namespace-nnfsdx", "name": "test-mgh", "reconcileID": "d5f3a46c-3e78-409f-9e8a-bd495127ebde", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:31.116Z INFO storage/postgres_crunchy.go:91 waiting the postgres cluster to be ready...messagepostgresclusters.postgres-operator.crunchydata.com "postgres" is forbidden: unable to create new content in namespace namespace-p6hpzt because it is being terminated 2025-08-18T00:31:31.218Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:31:31.270Z INFO KubeAPIWarningLogger log/warning_handler.go:65 metadata.finalizers: "fz": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers 2025-08-18T00:31:31.299Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:31.340Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers.init.func5.4 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/manager_test.go:139 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/node.go:475 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/suite.go:894 2025-08-18T00:31:31.405Z INFO manager/manager_reconciler.go:362 removing the migration resources •2025-08-18T00:31:31.630Z ERROR controller_certificates certificates/certificates.go:134 Failed to create secret {"name": "inventory-api-client-ca-certs", "error": "secrets \"inventory-api-client-ca-certs\" is forbidden: unable to create new content in namespace namespace-nnfsdx because it is being terminated"} github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.createCASecret 
/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:134 github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.CreateInventoryCerts /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:70 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:164 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:31.635Z ERROR controller/controller.go:316 Reconciler error {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-nnfsdx"}, "namespace": "namespace-nnfsdx", "name": "test-mgh", "reconcileID": "04609440-c403-4c04-9208-506ee32b3616", "error": "secrets \"inventory-api-client-ca-certs\" is forbidden: unable to create new content in namespace namespace-nnfsdx because it is being terminated"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 null •{ "components": { "inventory-api": { "name": "inventory-api", "kind": "Deployment", "type": "Available", "status": "False", "lastTransitionTime": "2025-08-18T00:31:31Z", "reason": "ReconcileError", "message": "secrets \"inventory-api-client-ca-certs\" is forbidden: unable to create new content in namespace namespace-nnfsdx because it is being terminated" }, "multicluster-global-hub-manager": { "name": "multicluster-global-hub-manager", "kind": "Deployment", "type": "Available", "status": "False", "lastTransitionTime": "2025-08-18T00:31:31Z", "reason": "MinimumReplicasUnavailable", "message": "Component multicluster-global-hub-manager has been deployed but is not ready" } }, "phase": "" } •2025-08-18T00:31:32.487Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:296 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.487Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "b428914b-8db8-4794-81bf-8388be8a7fb6", "error": "failed to get the inventory client cert and key: Secret \"inventory-api-guest-certs\" not found"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.488Z INFO inventory/spicedb_reconciler.go:69 start spiceDB controller 2025-08-18T00:31:32.488Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:31:32.488Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-reconciler", 
"controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Deployment"} 2025-08-18T00:31:32.489Z INFO controller/controller.go:183 Starting Controller {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:32.489Z INFO controller/controller.go:217 Starting workers {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:31:32.492Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.492Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "102816d0-91d7-4c1b-a695-6244f16411b5", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 999 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x37268c8, 0xc0024a7c50}, {0x2b54860, 0x533b0f0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x2b54860?, 0x533b0f0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod({0x37268c8, 0xc0024a7c50}, {0x0, 0x0}, {0xc0011ef690, 0x10}, {0x318bd30?, 0x4?})\n\t/go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 +0xbd\ngithub.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile(0xc0017e3f20, {0x37268c8, 0xc0024a7c50}, {{{0x0?, 0x312b712?}, {0x5?, 0x100?}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 
+0x10aa\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc0024a7bc0?, {0x37268c8?, 0xc0024a7c50?}, {{{0xc0011ef690?, 0x0?}, {0xc0011ef688?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x3747160, {0x3726900, 0xc0008891d0}, {{{0xc0011ef690, 0x10}, {0xc0011ef688, 0x8}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x3747160, {0x3726900, 0xc0008891d0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 873\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile 
/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.492Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "102816d0-91d7-4c1b-a695-6244f16411b5", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.503Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, 
err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:296 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.503Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "ad72d096-0fc9-4d07-ae08-ec10490f010e", "error": "failed to get the inventory client cert and key: Secret \"inventory-api-guest-certs\" not found"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.524Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:296 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.524Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": 
{"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "46015290-59bf-4320-a266-8e19709968b7", "error": "failed to get the inventory client cert and key: Secret \"inventory-api-guest-certs\" not found"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.541Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:31:32.542Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:31:32.542Z INFO controller/controller.go:175 Starting EventSource {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.SpiceDBCluster"} 2025-08-18T00:31:32.542Z INFO controller/controller.go:183 Starting Controller {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:32.566Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io 
"multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:296 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.566Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "4612845f-4b10-40c5-91df-9a987c0dac14", "error": "failed to get the inventory client cert and key: Secret \"inventory-api-guest-certs\" not found"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.643Z INFO controller/controller.go:217 Starting workers {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:31:32.647Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:296 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.647Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": 
"operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "b278534e-b74e-4c59-83ed-557c86693317", "error": "failed to get the inventory client cert and key: Secret \"inventory-api-guest-certs\" not found"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.650Z INFO inventory/spicedb_controller.go:341 spicedb cluster is created spicedb 2025-08-18T00:31:32.650Z ERROR controller/controller.go:316 Reconciler error {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "4554fd76-07da-4cb2-a5ac-086716381560", "error": "failed to create spicedb cluster: resource name may not be empty"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:31:32.717Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:31:32.717Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"} 2025-08-18T00:31:32.717Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:31:32.717Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Deployment"} 2025-08-18T00:31:32.717Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Service"} 2025-08-18T00:31:32.717Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:31:32.722Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ClusterRole"} 2025-08-18T00:31:32.722Z INFO controller/controller.go:175 Starting EventSource 
{"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ClusterRoleBinding"} 2025-08-18T00:31:32.722Z INFO controller/controller.go:175 Starting EventSource {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Route"} 2025-08-18T00:31:32.722Z INFO controller/controller.go:183 Starting Controller {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:32.723Z INFO controller/controller.go:217 Starting workers {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:31:32.785Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.815Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.815Z ERROR 
runtime/runtime.go:142 Observed a panic {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "a73fee45-eaef-41d2-a470-74ded08174d8", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 999 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x37268c8, 0xc002795dd0}, {0x2b54860, 0x533b0f0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x2b54860?, 0x533b0f0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod({0x37268c8, 0xc002795dd0}, {0x0, 0x0}, {0xc000defcd0, 0x10}, {0x318bd30?, 0xffffffffffffffff?})\n\t/go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 +0xbd\ngithub.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile(0xc0017e3f20, {0x37268c8, 0xc002795dd0}, {{{0x0?, 0x312b712?}, {0x5?, 0x100?}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 +0x10aa\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc002795d40?, {0x37268c8?, 0xc002795dd0?}, {{{0xc0011ef690?, 0x0?}, {0xc0011ef688?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 +0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x3747160, {0x3726900, 0xc0008891d0}, {{{0xc0011ef690, 0x10}, 
{0xc0011ef688, 0x8}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x3747160, {0x3726900, 0xc0008891d0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 873\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.815Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-pbbkr9"}, "namespace": "namespace-pbbkr9", "name": "test-mgh", "reconcileID": "a73fee45-eaef-41d2-a470-74ded08174d8", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.870Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile 
/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:32.961Z INFO utils/utils.go:163 creating configmap, namespace: namespace-n825t5, name: multicluster-global-hub-alerting 2025-08-18T00:31:32.963Z INFO utils/utils.go:193 creating secret, namespace: namespace-n825t5, name: multicluster-global-hub-grafana-config 2025-08-18T00:31:32.968Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:367 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:31:33.043Z INFO controller/controller.go:175 Starting EventSource {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn", "source": "kind source: *v1alpha1.ClusterManagementAddOn"} 2025-08-18T00:31:33.043Z INFO controller/controller.go:175 Starting EventSource {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:31:33.043Z INFO controller/controller.go:183 Starting Controller {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn"} 2025-08-18T00:31:33.043Z INFO controller/controller.go:217 Starting workers {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn", "worker count": 1} 2025-08-18T00:31:33.049Z INFO mceaddons/mce_addons_controller.go:60 start mce addons controller 2025-08-18T00:31:33.050Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:31:33.050Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /cluster-proxy not found, skip reconcile 2025-08-18T00:31:33.050Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn 
/managed-serviceaccount not found, skip reconcile 2025-08-18T00:31:33.116Z INFO storage/postgres_crunchy.go:91 waiting the postgres cluster to be ready...messagepostgresclusters.postgres-operator.crunchydata.com "postgres" is forbidden: unable to create new content in namespace namespace-p6hpzt because it is being terminated 2025-08-18T00:31:33.127Z ERROR manager/manager_reconciler.go:218 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:218 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:348 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:33.278Z INFO utils/utils.go:163 creating configmap, namespace: test-mgh, name: multicluster-global-hub-alerting 2025-08-18T00:31:33.281Z INFO utils/utils.go:193 creating secret, namespace: test-mgh, name: multicluster-global-hub-grafana-config 2025-08-18T00:31:33.285Z ERROR 
grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "multiclusterglobalhub" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:367 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:34.057Z INFO mceaddons/mce_addons_controller.go:168 Update ClusterManagementAddOn /work-manager 2025-08-18T00:31:34.062Z INFO mceaddons/mce_addons_controller.go:168 Update ClusterManagementAddOn /cluster-proxy 2025-08-18T00:31:34.064Z INFO mceaddons/mce_addons_controller.go:168 Update ClusterManagementAddOn mc-hosted/cluster-proxy 2025-08-18T00:31:34.065Z INFO mceaddons/mce_addons_controller.go:168 Update ClusterManagementAddOn /managed-serviceaccount 2025-08-18T00:31:34.067Z ERROR mceaddons/mce_addons_controller.go:171 Failed to update cma, err:Operation cannot be fulfilled on clustermanagementaddons.addon.open-cluster-management.io 
"cluster-proxy": the object has been modified; please apply your changes to the latest version and try again github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/mceaddons.(*MceAddonsController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/mceaddons/mce_addons_controller.go:171 github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers.init.func1.3.1 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/mce_addons_test.go:146 reflect.Value.call /usr/local/go/src/reflect/value.go:584 reflect.Value.Call /usr/local/go/src/reflect/value.go:368 github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3 /go/pkg/mod/github.com/onsi/gomega@v1.38.0/internal/async_assertion.go:337 github.com/onsi/gomega/internal.(*AsyncAssertion).match /go/pkg/mod/github.com/onsi/gomega@v1.38.0/internal/async_assertion.go:410 github.com/onsi/gomega/internal.(*AsyncAssertion).Should /go/pkg/mod/github.com/onsi/gomega@v1.38.0/internal/async_assertion.go:145 github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers.init.func1.3 /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/mce_addons_test.go:179 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/node.go:475 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3 /go/pkg/mod/github.com/onsi/ginkgo/v2@v2.23.4/internal/suite.go:894 2025/08/18 00:31:34 [ERROR] Failed to reconcile addon, err:Operation cannot be fulfilled on clustermanagementaddons.addon.open-cluster-management.io "cluster-proxy": the object has been modified; please apply your changes to the latest version and try again •2025-08-18T00:31:34.187Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /work-manager not found, skip reconcile 2025-08-18T00:31:34.187Z INFO 
mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /cluster-proxy not found, skip reconcile 2025-08-18T00:31:34.187Z INFO mceaddons/mce_addons_controller.go:151 ClusterManagementAddOn /managed-serviceaccount not found, skip reconcile { "BootstrapServer": "localhost:test", "StatusTopic": "gh-status", "SpecTopic": "gh-spec", "ClusterID": "localhost:test", "CACert": "Y2EuY3J0", "ClientCert": "Y2xpZW50LmNydA==", "ClientKey": "Y2xpZW50LmtleQ==", "CASecretName": "", "ClientSecretName": "" } •2025-08-18T00:31:34.484Z INFO config/transport_config.go:233 set the inventory clientCA - key: inventory-api-client-ca-certs 2025-08-18T00:31:34.484Z INFO config/transport_config.go:237 set the inventory clientCA - cert: inventory-api-client-ca-certs 2025-08-18T00:31:34.494Z ERROR controller/controller.go:316 Reconciler error {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"multiclusterglobalhub","namespace":"namespace-j2n8bl"}, "namespace": "namespace-j2n8bl", "name": "multiclusterglobalhub", "reconcileID": "251092cc-ee80-4065-ae7c-472733f9ed1e", "error": "the transport connection() must not be empty"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:34.564Z INFO utils/utils.go:163 creating configmap, namespace: namespace-2vgmsj, name: multicluster-global-hub-alerting 2025-08-18T00:31:34.568Z INFO utils/utils.go:193 creating secret, namespace: 
namespace-2vgmsj, name: multicluster-global-hub-grafana-config 2025-08-18T00:31:34.843Z INFO protocol/strimzi_kafka_controller.go:173 start kafka controller 2025-08-18T00:31:34.843Z INFO protocol/strimzi_kafka_controller.go:194 kafka controller is started 2025-08-18T00:31:34.843Z INFO controller/controller.go:175 Starting EventSource {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"} 2025-08-18T00:31:34.843Z INFO controller/controller.go:175 Starting EventSource {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta2.Kafka"} 2025-08-18T00:31:34.843Z INFO controller/controller.go:175 Starting EventSource {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta2.KafkaUser"} 2025-08-18T00:31:34.843Z INFO controller/controller.go:175 Starting EventSource {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1beta2.KafkaTopic"} 2025-08-18T00:31:34.843Z INFO controller/controller.go:183 Starting Controller {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:34.945Z INFO controller/controller.go:217 Starting workers {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1} 2025-08-18T00:31:34.977Z INFO protocol/strimzi_transporter.go:685 kafka cluster is ready 2025-08-18T00:31:34.977Z INFO config/transport_config.go:255 set the ca - client key: kafka-clients-ca 2025-08-18T00:31:34.977Z INFO 
config/transport_config.go:271 set the ca - client cert: kafka-clients-ca-cert •2025-08-18T00:31:35.027Z ERROR runtime/runtime.go:142 Observed a panic {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-2vgmsj"}, "namespace": "namespace-2vgmsj", "name": "test-mgh", "reconcileID": "34e59be8-2605-4532-aa1d-37f0e4752798", "panic": "runtime error: invalid memory address or nil pointer dereference", "panicGoValue": "\"invalid memory address or nil pointer dereference\"", "stacktrace": "goroutine 999 [running]:\nk8s.io/apimachinery/pkg/util/runtime.logPanic({0x37268c8, 0xc001d577a0}, {0x2b54860, 0x533b0f0})\n\t/go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:132 +0xbc\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 +0x112\npanic({0x2b54860?, 0x533b0f0?})\n\t/usr/local/go/src/runtime/panic.go:792 +0x132\ngithub.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod({0x37268c8, 0xc001d577a0}, {0x0, 0x0}, {0xc002dbe5a0, 0x10}, {0x318bd30?, 0x3?})\n\t/go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 +0xbd\ngithub.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile(0xc0017e3f20, {0x37268c8, 0xc001d577a0}, {{{0x0?, 0x312b712?}, {0x5?, 0x100?}}})\n\t/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 +0x10aa\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile(0xc001d57710?, {0x37268c8?, 0xc001d577a0?}, {{{0xc002dbe5a0?, 0x0?}, {0xc002dbe590?, 0x0?}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 
+0xbf\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x3747160, {0x3726900, 0xc0008891d0}, {{{0xc002dbe5a0, 0x10}, {0xc002dbe590, 0x8}}})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 +0x3a5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x3747160, {0x3726900, 0xc0008891d0})\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 +0x20d\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 +0x85\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 873\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:220 +0x48d\n"} k8s.io/apimachinery/pkg/util/runtime.logPanic /go/pkg/mod/k8s.io/apimachinery@v0.33.2/pkg/util/runtime/runtime.go:142 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile.func1 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:105 runtime.gopanic /usr/local/go/src/runtime/panic.go:792 runtime.panicmem /usr/local/go/src/runtime/panic.go:262 runtime.sigpanic /usr/local/go/src/runtime/signal_unix.go:925 github.com/stolostron/multicluster-global-hub/pkg/utils.RestartPod /go/src/github.com/stolostron/multicluster-global-hub/pkg/utils/utils.go:109 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager.(*ManagerReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/manager/manager_reconciler.go:275 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:35.028Z ERROR controller/controller.go:316 Reconciler error {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-2vgmsj"}, "namespace": "namespace-2vgmsj", "name": "test-mgh", "reconcileID": "34e59be8-2605-4532-aa1d-37f0e4752798", "error": "panic: runtime error: invalid memory address or nil pointer dereference [recovered]"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:35.116Z INFO storage/postgres_crunchy.go:91 waiting the postgres cluster to be ready...messagepostgresclusters.postgres-operator.crunchydata.com "postgres" is forbidden: unable to create new content in namespace namespace-p6hpzt because it is being terminated 2025-08-18T00:31:35.128Z INFO protocol/strimzi_transporter.go:685 kafka cluster is ready 2025-08-18T00:31:35.128Z INFO 
config/transport_config.go:255 set the ca - client key: kafka-clients-ca 2025-08-18T00:31:35.128Z INFO config/transport_config.go:271 set the ca - client cert: kafka-clients-ca-cert 2025-08-18T00:31:35.202Z INFO protocol/strimzi_transporter.go:685 kafka cluster is ready 2025-08-18T00:31:35.202Z INFO config/transport_config.go:255 set the ca - client key: kafka-clients-ca 2025-08-18T00:31:35.202Z INFO config/transport_config.go:271 set the ca - client cert: kafka-clients-ca-cert 2025-08-18T00:31:35.244Z INFO protocol/strimzi_transporter.go:369 create the kafakUser: hub1-kafka-user 2025-08-18T00:31:35.279Z INFO protocol/strimzi_transporter.go:685 kafka cluster is ready 2025-08-18T00:31:35.279Z INFO config/transport_config.go:255 set the ca - client key: kafka-clients-ca 2025-08-18T00:31:35.279Z INFO config/transport_config.go:271 set the ca - client cert: kafka-clients-ca-cert 2025-08-18T00:31:35.279Z ERROR protocol/strimzi_kafka_controller.go:99 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "test-mgh" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/transporter/protocol.(*KafkaController).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/transporter/protocol/strimzi_kafka_controller.go:99 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/transporter/protocol.(*KafkaController).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/transporter/protocol/strimzi_kafka_controller.go:136 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:31:35.281Z ERROR grafana/grafana_reconciler.go:268 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "test-mgh" not found github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile.func1 /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:268 github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana.(*GrafanaReconciler).Reconcile /go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/grafana/grafana_reconciler.go:367 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 •2025-08-18T00:31:35.283Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables 2025-08-18T00:31:35.283Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables 2025-08-18T00:31:35.283Z INFO controller/controller.go:237 
Shutdown signal received, waiting for all workers to finish {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:35.283Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn"} 2025-08-18T00:31:35.283Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:35.283Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:35.283Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:35.283Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:35.283Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:31:35.283Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"} 2025-08-18T00:31:35.283Z INFO controller/controller.go:237 Shutdown 
signal received, waiting for all workers to finish {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:35.283Z ERROR storage/storage_reconciler.go:214 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "test-mgh" not found
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile.func1
	/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:214
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage.(*StorageReconciler).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/storage/storage_reconciler.go:220
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:35.283Z ERROR controller/controller.go:316 Reconciler error {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-x5k4qv"}, "namespace": "namespace-x5k4qv", "name": "test-mgh", "reconcileID": "086c3970-5ff3-45f1-84f4-bdac3b5f6917", "error": "storage not ready, Error: context canceled"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:35.283Z INFO controller/controller.go:239 All workers finished {"controller": "spicedb-controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:35.283Z INFO controller/controller.go:239 All workers finished {"controller": "strimzi_controller", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:35.283Z INFO controller/controller.go:239 All workers finished {"controller": "AddonsController", "controllerGroup": "addon.open-cluster-management.io", "controllerKind": "ClusterManagementAddOn"}
2025-08-18T00:31:35.283Z INFO controller/controller.go:239 All workers finished {"controller": "grafanaController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:35.283Z INFO controller/controller.go:239 All workers finished {"controller": "manager", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:35.283Z INFO controller/controller.go:239 All workers finished {"controller": "spicedb-reconciler", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:35.283Z INFO controller/controller.go:239 All workers finished {"controller": "postgresUserController", "controllerGroup": "", "controllerKind": "ConfigMap"}
2025-08-18T00:31:35.283Z INFO controller/controller.go:239 All workers finished {"controller": "storageController", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
waiting for server to shut down...2025-08-18 00:31:35.286 UTC [26418] LOG: received fast shutdown request
.2025-08-18 00:31:35.286 UTC [26418] LOG: aborting any active transactions
2025-08-18 00:31:35.287 UTC [26418] LOG: background worker "logical replication launcher" (PID 26424) exited with exit code 1
2025-08-18 00:31:35.291 UTC [26419] LOG: shutting down
2025-08-18 00:31:35.292 UTC [26419] LOG: checkpoint starting: shutdown immediate
2025-08-18 00:31:35.375 UTC [26419] LOG: checkpoint complete: wrote 4691 buffers (28.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.070 s, sync=0.014 s, total=0.084 s; sync files=1680, longest=0.001 s, average=0.001 s; distance=22404 kB, estimate=22404 kB; lsn=0/2ABFBD0, redo lsn=0/2ABFBD0
2025-08-18 00:31:35.396 UTC [26418] LOG: database system is shut down
done
server stopped
2025-08-18T00:31:35.756Z ERROR controller_certificates certificates/certificates.go:134 Failed to create secret {"name": "inventory-api-client-ca-certs", "error": "Post \"https://127.0.0.1:41413/api/v1/namespaces/namespace-2vgmsj/secrets\": dial tcp 127.0.0.1:41413: connect: connection refused"}
github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.createCASecret
	/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:134
github.com/stolostron/multicluster-global-hub/operator/pkg/certificates.CreateInventoryCerts
	/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/certificates/certificates.go:70
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:164
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:35.757Z ERROR inventory/inventory_reconciler.go:152 failed to update mgh status, err:MulticlusterGlobalHub.operator.open-cluster-management.io "test-mgh" not found
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile.func1
	/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:152
github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory.(*InventoryReconciler).Reconcile
	/go/src/github.com/stolostron/multicluster-global-hub/operator/pkg/controllers/inventory/inventory_reconciler.go:165
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:35.757Z ERROR controller/controller.go:316 Reconciler error {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"test-mgh","namespace":"namespace-nnfsdx"}, "namespace": "namespace-nnfsdx", "name": "test-mgh", "reconcileID": "b998aa36-1f38-49c7-a43a-601f9f56e80a", "error": "Post \"https://127.0.0.1:41413/api/v1/namespaces/namespace-2vgmsj/secrets\": dial tcp 127.0.0.1:41413: connect: connection refused"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:35.757Z INFO controller/controller.go:239 All workers finished {"controller": "inventory", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:35.757Z INFO manager/internal.go:550 Stopping and waiting for caches
I0818 00:31:35.757423 25846 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1beta2.KafkaNodePool" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:35.757538 25846 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1alpha1.ClusterServiceVersion" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
2025-08-18T00:31:35.757Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:31:35.757Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:31:35.757Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager
2025-08-18T00:31:35.757Z ERROR manager/internal.go:512 error received after stop sequence was engaged {"error": "leader election lost"}
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/manager/internal.go:512
E0818 00:31:35.758158 25846 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://127.0.0.1:41413/api/v1/namespaces/default/events\": dial tcp 127.0.0.1:41413: connect: connection refused" event="&Event{ObjectMeta:{549a8919.open-cluster-management.io.185cb51e1ff91986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Lease,Namespace:default,Name:549a8919.open-cluster-management.io,UID:f11581a9-b290-4b3e-af02-2e1c646d15f8,APIVersion:coordination.k8s.io/v1,ResourceVersion:364,FieldPath:,},Reason:LeaderElection,Message:test-integration_179a0aa5-f661-4ee8-95e6-6348921a2483 stopped leading,Source:EventSource{Component:test-integration_179a0aa5-f661-4ee8-95e6-6348921a2483,Host:,},FirstTimestamp:2025-08-18 00:31:35.757715846 +0000 UTC m=+22.440702396,LastTimestamp:2025-08-18 00:31:35.757715846 +0000 UTC m=+22.440702396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:test-integration_179a0aa5-f661-4ee8-95e6-6348921a2483,ReportingInstance:,}"
E0818 00:31:35.758215 25846 event.go:319] "Unable to write event
(broadcaster is shut down)" event="&Event{ObjectMeta:{549a8919.open-cluster-management.io.185cb51e1ff91986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Lease,Namespace:default,Name:549a8919.open-cluster-management.io,UID:f11581a9-b290-4b3e-af02-2e1c646d15f8,APIVersion:coordination.k8s.io/v1,ResourceVersion:364,FieldPath:,},Reason:LeaderElection,Message:test-integration_179a0aa5-f661-4ee8-95e6-6348921a2483 stopped leading,Source:EventSource{Component:test-integration_179a0aa5-f661-4ee8-95e6-6348921a2483,Host:,},FirstTimestamp:2025-08-18 00:31:35.757715846 +0000 UTC m=+22.440702396,LastTimestamp:2025-08-18 00:31:35.757715846 +0000 UTC m=+22.440702396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:test-integration_179a0aa5-f661-4ee8-95e6-6348921a2483,ReportingInstance:,}"
Ran 15 of 15 Specs in 23.246 seconds
SUCCESS! -- 15 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestControllers (23.25s)
PASS
ok      github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers     23.331s
=== RUN TestControllers
Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent
===============================================================================================================================================
Random Seed: 1755477073
Will run 10 of 10 specs
2025-08-18T00:31:23.775Z INFO addon/addon_manager.go:66 start addon manager controller
2025-08-18T00:31:23.781Z INFO addon/addon_manager.go:130 starting addon manager
I0818 00:31:23.789403 25847 base_controller.go:34] Waiting for caches to sync for cma-managed-by-controller
I0818 00:31:23.789452 25847 base_controller.go:34] Waiting for caches to sync for addon-deploy-controller
I0818 00:31:23.789465 25847 base_controller.go:34] Waiting for caches to sync for addon-registration-controller
I0818
00:31:23.789809 25847 base_controller.go:34] Waiting for caches to sync for CSRApprovingController
I0818 00:31:23.789828 25847 base_controller.go:34] Waiting for caches to sync for CSRSignController
2025-08-18T00:31:23.782Z INFO addon/addon_manager.go:76 inited GlobalHubAddonManager controller
2025-08-18T00:31:23.800Z INFO addon/default_agent_controller.go:71 start default agent controller
2025-08-18T00:31:23.800Z INFO addon/default_agent_controller.go:170 the default agent controller is started
2025-08-18T00:31:23.801Z INFO agent/local_agent_controller.go:48 start local agent controller
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.ManagedCluster"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.ManagedClusterAddOn"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1alpha1.ClusterManagementAddOn"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "source": "kind source: *v1.Secret"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:183 Starting Controller {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1alpha4.MulticlusterGlobalHub"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ManagedCluster"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.Deployment"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ConfigMap"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ServiceAccount"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ClusterRole"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:175 Starting EventSource {"controller": "local-agent-reconciler", "source": "kind source: *v1.ClusterRoleBinding"}
2025-08-18T00:31:23.809Z INFO controller/controller.go:183 Starting Controller {"controller": "local-agent-reconciler"}
I0818 00:31:23.889657 25847 base_controller.go:40] Caches are synced for addon-registration-controller
I0818 00:31:23.889705 25847 base_controller.go:78] Starting #1 worker of addon-registration-controller controller ...
I0818 00:31:23.889736 25847 base_controller.go:40] Caches are synced for cma-managed-by-controller
I0818 00:31:23.889747 25847 base_controller.go:78] Starting #1 worker of cma-managed-by-controller controller ...
I0818 00:31:23.890656 25847 base_controller.go:40] Caches are synced for CSRSignController
I0818 00:31:23.890679 25847 base_controller.go:78] Starting #1 worker of CSRSignController controller ...
I0818 00:31:23.890711 25847 base_controller.go:40] Caches are synced for CSRApprovingController
I0818 00:31:23.890718 25847 base_controller.go:78] Starting #1 worker of CSRApprovingController controller ...
2025-08-18T00:31:23.912Z INFO addon/default_agent_controller.go:457 triggering all the addons/clusters: %d0
2025-08-18T00:31:23.927Z INFO controller/controller.go:217 Starting workers {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "worker count": 1}
2025-08-18T00:31:23.927Z INFO addon/default_agent_controller.go:457 triggering all the addons/clusters: %d1
2025-08-18T00:31:23.927Z INFO controller/controller.go:217 Starting workers {"controller": "local-agent-reconciler", "worker count": 1}
2025-08-18T00:31:23.927Z INFO addon/default_agent_controller.go:248 not found the cluster test-mgh, the controller might triggered by multiclusterglboalhub
2025-08-18T00:31:23.927Z INFO addon/default_agent_controller.go:265 cluster(hub-zdcbsl): isDetaching - false, hasDeployLabel - true
2025-08-18T00:31:23.927Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-zdcbsl", "addon": "multicluster-global-hub-controller"}
2025-08-18T00:31:23.934Z INFO KubeAPIWarningLogger log/warning_handler.go:65 unknown field "status.healthCheck"
2025-08-18T00:31:23.934Z INFO agent/local_agent_controller.go:304 create transport secret transport-config-local-cluster for local agent
2025-08-18T00:31:23.945Z INFO addon/default_agent_controller.go:265 cluster(hub-hosting-bjsss9): isDetaching - false, hasDeployLabel - false
2025-08-18T00:31:23.945Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-hosting-bjsss9"}
2025-08-18T00:31:23.965Z INFO
addon/default_agent_controller.go:265 cluster(hub-hosting-bjsss9): isDetaching - false, hasDeployLabel - false
2025-08-18T00:31:23.965Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-hosting-bjsss9"}
I0818 00:31:24.890475 25847 base_controller.go:40] Caches are synced for addon-deploy-controller
I0818 00:31:24.890510 25847 base_controller.go:78] Starting #1 worker of addon-deploy-controller controller ...
------------------------------
• [FAILED] [60.055 seconds]
deploy hosted addon [It] Should create hosted addon in OCP
/go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent/clustermanagementaddon_test.go:24

  Timeline >>
  STEP: By preparing clusters @ 08/18/25 00:31:23.912
  STEP: By checking the addon CR is is created in the cluster ns @ 08/18/25 00:31:23.964
  STEP: By checking the agent manifestworks are created for the newly created managed cluster @ 08/18/25 00:31:23.966
  [FAILED] in [It] - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent/clustermanagementaddon_test.go:68 @ 08/18/25 00:32:23.967
  << Timeline

  [FAILED] Timed out after 60.001s.
  Unexpected error:
      <*errors.StatusError | 0xc0015e9ea0>:
      manifestworks.work.open-cluster-management.io "addon-multicluster-global-hub-controller-deploy-hosting-hub-zdcbsl-0" not found
      {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {
                  SelfLink: "",
                  ResourceVersion: "",
                  Continue: "",
                  RemainingItemCount: nil,
              },
              Status: "Failure",
              Message: "manifestworks.work.open-cluster-management.io \"addon-multicluster-global-hub-controller-deploy-hosting-hub-zdcbsl-0\" not found",
              Reason: "NotFound",
              Details: {
                  Name: "addon-multicluster-global-hub-controller-deploy-hosting-hub-zdcbsl-0",
                  Group: "work.open-cluster-management.io",
                  Kind: "manifestworks",
                  UID: "",
                  Causes: nil,
                  RetryAfterSeconds: 0,
              },
              Code: 404,
          },
      }
      occurred
  In [It] at: /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent/clustermanagementaddon_test.go:68 @ 08/18/25 00:32:23.967
------------------------------
2025-08-18T00:32:23.970Z INFO addon/default_agent_controller.go:265 cluster(hub-dds5f9): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:23.970Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-dds5f9", "addon": "multicluster-global-hub-controller"}
2025-08-18T00:32:24.023Z INFO certificates/csr.go:17 specify the clientName(CN: hub-dds5f9-kafka-user) for managed hub cluster(hub-dds5f9)
2025-08-18T00:32:24.023Z INFO addon/default_agent_controller.go:265 cluster(hub-hosting-jhk594): isDetaching - false, hasDeployLabel - false
2025-08-18T00:32:24.023Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-hosting-jhk594"}
2025-08-18T00:32:24.023Z INFO addon/default_agent_controller.go:265 cluster(hub-dds5f9): isDetaching - false, hasDeployLabel - true
I0818 00:32:24.025890 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:24.026Z INFO certificates/csr.go:17 specify the clientName(CN: hub-dds5f9-kafka-user) for managed hub
cluster(hub-dds5f9)
2025-08-18T00:32:24.027Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
I0818 00:32:24.028404 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:24.036Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
2025-08-18T00:32:24.039Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
I0818 00:32:24.044455 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
2025-08-18T00:32:24.044Z INFO certificates/csr.go:17 specify the clientName(CN: hub-dds5f9-kafka-user) for managed hub cluster(hub-dds5f9)
2025-08-18T00:32:24.045Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
I0818 00:32:24.047261 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:24.049Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
2025-08-18T00:32:24.055Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
2025-08-18T00:32:24.060Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
I0818 00:32:24.064478 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
2025-08-18T00:32:24.064Z INFO certificates/csr.go:17 specify the clientName(CN: hub-dds5f9-kafka-user) for managed hub cluster(hub-dds5f9)
2025-08-18T00:32:24.065Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
I0818 00:32:24.066933 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:24.068Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
2025-08-18T00:32:24.070Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
2025-08-18T00:32:24.073Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
I0818 00:32:24.086323 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
2025-08-18T00:32:24.088Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
2025-08-18T00:32:24.091Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
2025-08-18T00:32:24.094Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
2025-08-18T00:32:24.097Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-dds5f9"}
I0818 00:32:24.101146 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
••2025-08-18T00:32:24.258Z INFO agent/local_agent_controller.go:163 local cluster name changed from local-cluster to local-cluster-new
2025-08-18T00:32:24.273Z INFO agent/local_agent_controller.go:304 create transport secret transport-config-local-cluster-new for local agent
•2025-08-18T00:32:25.258Z INFO addon/default_agent_controller.go:248 not found the cluster test-mgh, the controller might triggered by multiclusterglboalhub
•2025-08-18T00:32:26.268Z INFO addon/default_agent_controller.go:265 cluster(hub-wjggzk): isDetaching - false, hasDeployLabel - false
2025-08-18T00:32:26.268Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-wjggzk"}
2025-08-18T00:32:26.270Z INFO addon/default_agent_controller.go:265 cluster(hub-wjggzk): isDetaching - false, hasDeployLabel - false
2025-08-18T00:32:26.270Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-wjggzk"}
•2025-08-18T00:32:26.275Z INFO addon/default_agent_controller.go:265 cluster(hub-nrm85m): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:26.275Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-nrm85m", "addon": "multicluster-global-hub-controller"}
2025-08-18T00:32:26.328Z INFO certificates/csr.go:17 specify the clientName(CN: hub-nrm85m-kafka-user) for managed hub cluster(hub-nrm85m)
2025-08-18T00:32:26.328Z INFO addon/default_agent_controller.go:265 cluster(hub-nrm85m): isDetaching - false, hasDeployLabel - true
I0818 00:32:26.331306 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:26.331Z INFO certificates/csr.go:17 specify the clientName(CN: hub-nrm85m-kafka-user) for managed hub cluster(hub-nrm85m)
I0818 00:32:26.334023 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
I0818 00:32:26.342413 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
2025-08-18T00:32:26.342Z INFO certificates/csr.go:17 specify the clientName(CN: hub-nrm85m-kafka-user) for managed hub cluster(hub-nrm85m)
I0818 00:32:26.344915 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
I0818 00:32:26.350871 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
•2025-08-18T00:32:26.537Z INFO addon/default_agent_controller.go:265 cluster(hub-t6772h): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:26.537Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-t6772h", "addon": "multicluster-global-hub-controller"}
2025-08-18T00:32:26.590Z INFO certificates/csr.go:17 specify the clientName(CN: hub-t6772h-kafka-user) for managed hub cluster(hub-t6772h)
2025-08-18T00:32:26.590Z INFO addon/default_agent_controller.go:265 cluster(hub-t6772h): isDetaching - false, hasDeployLabel - true
I0818 00:32:26.593731 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:26.593Z INFO certificates/csr.go:17 specify the clientName(CN: hub-t6772h-kafka-user) for managed hub cluster(hub-t6772h)
2025-08-18T00:32:26.594Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-t6772h"}
I0818 00:32:26.596214 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:26.602Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-t6772h"}
I0818 00:32:26.607475 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
2025-08-18T00:32:26.607Z INFO certificates/csr.go:17 specify the clientName(CN: hub-t6772h-kafka-user) for managed hub cluster(hub-t6772h)
2025-08-18T00:32:26.608Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-t6772h"}
I0818 00:32:26.609854 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:26.612Z INFO addon/addon_agent_manifest_acm.go:41 installing ACM on managed hub {"cluster": "hub-t6772h"}
I0818 00:32:26.617026 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
•2025-08-18T00:32:26.806Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-n276tx): isDetaching - false, hasDeployLabel - false
2025-08-18T00:32:26.806Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-ocp-mode-none-n276tx"}
2025-08-18T00:32:26.808Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-n276tx): isDetaching - false, hasDeployLabel - false
2025-08-18T00:32:26.808Z INFO addon/default_agent_controller.go:267 deleting resources and addon {"cluster": "hub-ocp-mode-none-n276tx"}
2025-08-18T00:32:26.813Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-no-condtion-gp954h): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:26.813Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "hub-ocp-no-condtion-gp954h", "addon": "multicluster-global-hub-controller"}
2025-08-18T00:32:26.866Z INFO certificates/csr.go:17 specify the clientName(CN: hub-ocp-no-condtion-gp954h-kafka-user) for managed hub cluster(hub-ocp-no-condtion-gp954h)
2025-08-18T00:32:26.866Z INFO
addon/default_agent_controller.go:265 cluster(local-cluster): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:26.866Z INFO addon/default_agent_controller.go:311 creating resources and addon {"cluster": "local-cluster", "addon": "multicluster-global-hub-controller"}
2025-08-18T00:32:26.868Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:26.868Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "d0c63bce-6587-4c7b-9726-b39afb73ff35", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
I0818 00:32:26.868929 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:26.869Z INFO certificates/csr.go:17 specify the clientName(CN: local-cluster-kafka-user) for managed hub cluster(local-cluster)
I0818 00:32:26.871529 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:26.871Z INFO certificates/csr.go:17 specify the clientName(CN: hub-ocp-no-condtion-gp954h-kafka-user) for managed hub cluster(hub-ocp-no-condtion-gp954h)
I0818 00:32:26.873940 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:26.873Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:26.874Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "e8ad8a2f-cb06-48ab-9e4a-9cb0479481e9", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:32:26.874Z INFO certificates/csr.go:17 specify the clientName(CN: local-cluster-kafka-user) for managed hub cluster(local-cluster)
I0818 00:32:26.876176 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
I0818 00:32:26.880286 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
2025-08-18T00:32:26.880Z INFO certificates/csr.go:17 specify the clientName(CN: hub-ocp-no-condtion-gp954h-kafka-user) for managed hub cluster(hub-ocp-no-condtion-gp954h)
I0818 00:32:26.882881 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
2025-08-18T00:32:26.884Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:26.884Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "4409bc11-2791-4dd2-82db-17f6d67e4f73", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
I0818 00:32:26.892639 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
2025-08-18T00:32:26.892Z INFO certificates/csr.go:17 specify the clientName(CN: local-cluster-kafka-user) for managed hub cluster(local-cluster)
I0818 00:32:26.895201 25847 warnings.go:110] "Warning: unknown field \"status.namespace\""
I0818 00:32:26.900606 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
2025-08-18T00:32:26.905Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:26.905Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "acc99dd5-3b42-4d65-9193-1e9c16e09132", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
I0818 00:32:26.908111 25847 warnings.go:110] "Warning: unknown field \"status.healthCheck\""
2025-08-18T00:32:26.946Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:26.946Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "1d9a2a56-9d8d-4e31-add2-3ec80ed06eb2", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:32:27.026Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:27.026Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "04d151e3-81b2-49c2-a548-ec28f22c185d", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:32:27.187Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true
2025-08-18T00:32:27.187Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID":
"3f30e79f-bf0d-474f-8885-f269fdde6482", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:32:27.508Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true 2025-08-18T00:32:27.508Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "00d476a0-3e90-4e34-a643-53153920c422", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:32:28.149Z 
INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true 2025-08-18T00:32:28.149Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "16e68cd3-1e15-48df-9a38-e140efedc84c", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:32:29.430Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true 2025-08-18T00:32:29.430Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "105c573f-6c58-4dcd-8105-8d7441195afa", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler 
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 2025-08-18T00:32:31.990Z INFO addon/default_agent_controller.go:265 cluster(hub-ocp-mode-none-64zwdl): isDetaching - false, hasDeployLabel - true 2025-08-18T00:32:31.990Z ERROR controller/controller.go:316 Reconciler error {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub", "MulticlusterGlobalHub": {"name":"hub-ocp-mode-none-64zwdl"}, "namespace": "", "name": "hub-ocp-mode-none-64zwdl", "reconcileID": "2c5a159c-1261-4aa6-ac1e-eb056b21c45b", "error": "failed to get import.open-cluster-management.io/hosting-cluster-name when addon in hub-ocp-mode-none-64zwdl is installed in hosted mode"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224 ••2025-08-18T00:32:33.103Z INFO addon/default_agent_controller.go:248 not found the cluster test-mgh, the controller might triggered by multiclusterglboalhub I0818 00:32:33.103298 25847 base_controller.go:107] Shutting down CSRApprovingController ... 
I0818 00:32:33.103322 25847 base_controller.go:107] Shutting down addon-registration-controller ... 2025-08-18T00:32:33.103Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables I0818 00:32:33.103344 25847 base_controller.go:107] Shutting down CSRSignController ... I0818 00:32:33.103345 25847 base_controller.go:82] Shutting down worker of addon-registration-controller controller ... I0818 00:32:33.103352 25847 base_controller.go:82] Shutting down worker of cma-managed-by-controller controller ... 2025-08-18T00:32:33.103Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables I0818 00:32:33.103359 25847 base_controller.go:72] All addon-registration-controller workers have been terminated I0818 00:32:33.103341 25847 base_controller.go:107] Shutting down cma-managed-by-controller ... I0818 00:32:33.103382 25847 base_controller.go:82] Shutting down worker of CSRApprovingController controller ... I0818 00:32:33.103387 25847 base_controller.go:72] All cma-managed-by-controller workers have been terminated I0818 00:32:33.103391 25847 base_controller.go:72] All CSRApprovingController workers have been terminated I0818 00:32:33.103362 25847 base_controller.go:82] Shutting down worker of CSRSignController controller ... I0818 00:32:33.103400 25847 base_controller.go:72] All CSRSignController workers have been terminated 2025-08-18T00:32:33.103Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "local-agent-reconciler"} I0818 00:32:33.103396 25847 base_controller.go:82] Shutting down worker of addon-deploy-controller controller ... I0818 00:32:33.103373 25847 base_controller.go:107] Shutting down addon-deploy-controller ... 
2025-08-18T00:32:33.103Z INFO controller/controller.go:239 All workers finished {"controller": "local-agent-reconciler"} 2025-08-18T00:32:33.103Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} I0818 00:32:33.103420 25847 base_controller.go:72] All addon-deploy-controller workers have been terminated 2025-08-18T00:32:33.103Z INFO controller/controller.go:239 All workers finished {"controller": "default-agent-ctrl", "controllerGroup": "operator.open-cluster-management.io", "controllerKind": "MulticlusterGlobalHub"} 2025-08-18T00:32:33.103Z INFO manager/internal.go:550 Stopping and waiting for caches 2025-08-18T00:32:33.103Z INFO manager/internal.go:554 Stopping and waiting for webhooks 2025-08-18T00:32:33.103Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers 2025-08-18T00:32:33.103Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager Summarizing 1 Failure: [FAIL] deploy hosted addon [It] Should create hosted addon in OCP /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent/clustermanagementaddon_test.go:68 Ran 10 of 10 Specs in 80.778 seconds FAIL! 
-- 9 Passed | 1 Failed | 0 Pending | 0 Skipped --- FAIL: TestControllers (80.78s) FAIL FAIL github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent 80.904s === RUN TestControllers Running Suite: Standalone Agent Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent/standalone_agent ================================================================================================================================================================================= Random Seed: 1755477073 Will run 1 of 1 specs 2025-08-18T00:31:23.846Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1alpha1.MulticlusterGlobalHubAgent"} 2025-08-18T00:31:23.846Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.Deployment"} 2025-08-18T00:31:23.846Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.ConfigMap"} 2025-08-18T00:31:23.846Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.ServiceAccount"} 2025-08-18T00:31:23.846Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.ClusterRole"} 2025-08-18T00:31:23.846Z INFO controller/controller.go:175 Starting EventSource {"controller": "standalone-agent-reconciler", "source": "kind source: *v1.ClusterRoleBinding"} 2025-08-18T00:31:23.846Z INFO controller/controller.go:183 Starting Controller {"controller": "standalone-agent-reconciler"} 2025-08-18T00:31:23.952Z INFO controller/controller.go:217 Starting workers {"controller": "standalone-agent-reconciler", "worker count": 1} 2025-08-18T00:31:24.059Z ERROR controller/controller.go:316 Reconciler error 
{"controller": "standalone-agent-reconciler", "namespace": "default", "name": "multiclusterglobalhubagent", "reconcileID": "e29ebfc7-c04e-438c-9f53-1bdfc818291d", "error": "Infrastructure.config.openshift.io \"cluster\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:24.064Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "default", "name": "multiclusterglobalhubagent", "reconcileID": "151c325f-0d4b-44df-952a-148e565e2ed5", "error": "Infrastructure.config.openshift.io \"cluster\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:24.074Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "default", "name": "multiclusterglobalhubagent", "reconcileID": "e9b30ad4-f353-438a-aab6-7bdec8f6fd91", "error": "Infrastructure.config.openshift.io \"cluster\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:24.094Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "default", "name": "multiclusterglobalhubagent", "reconcileID": "f91ce9fa-859b-44b4-b94a-966dcfdf5344", "error": "Infrastructure.config.openshift.io \"cluster\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:24.141Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "default", "name": "multiclusterglobalhubagent", "reconcileID": "026efb8f-b3c6-4bba-beeb-a494770321a2", "error": "Infrastructure.config.openshift.io \"cluster\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:24.222Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "default", "name": "multiclusterglobalhubagent", "reconcileID": "163fe7e5-6b15-4b89-bf74-8d5935a2d45e", "error": "Infrastructure.config.openshift.io \"cluster\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:24.382Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "default", "name": "multiclusterglobalhubagent", "reconcileID": "6bee84ea-3ff1-4f0d-98b1-f0554a16ab7b", "error": "Infrastructure.config.openshift.io \"cluster\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:24.703Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "default", "name": "multiclusterglobalhubagent", "reconcileID": "ab2883ee-b5f4-4b1c-a565-a46286cdb9b0", "error": "Infrastructure.config.openshift.io \"cluster\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:25.348Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "default", "name": "multiclusterglobalhubagent", "reconcileID": "f7daa1f8-0702-4d63-9599-91a63ff5415d", "error": "Infrastructure.config.openshift.io \"cluster\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
•2025-08-18T00:31:27.054Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
2025-08-18T00:31:27.054Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
2025-08-18T00:31:27.054Z INFO controller/controller.go:237 Shutdown signal received, waiting for all workers to finish {"controller": "standalone-agent-reconciler"}
E0818 00:31:27.065859 25862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:36253/apis?timeout=32s\": dial tcp 127.0.0.1:36253: connect: connection refused" logger="UnhandledError"
2025-08-18T00:31:27.067Z ERROR controller/controller.go:316 Reconciler error {"controller": "standalone-agent-reconciler", "namespace": "", "name": "multicluster-global-hub:multicluster-global-hub-agent", "reconcileID": "537a77fb-330b-4159-adcb-231236d52759", "error": "failed to create/update standalone agent objects: Get \"https://127.0.0.1:36253/apis?timeout=32s\": dial tcp 127.0.0.1:36253: connect: connection refused"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/controller/controller.go:224
2025-08-18T00:31:27.067Z INFO controller/controller.go:239 All workers finished {"controller": "standalone-agent-reconciler"}
2025-08-18T00:31:27.067Z INFO manager/internal.go:550 Stopping and waiting for caches
I0818 00:31:27.067508 25862 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Infrastructure" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.067766 25862 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.Deployment" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.067957 25862 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.068119 25862 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ClusterRole" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.068299 25862 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ServiceAccount" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
I0818 00:31:27.068457 25862 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ClusterRoleBinding" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
2025-08-18T00:31:27.068Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:31:27.068Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:31:27.068Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager

Ran 1 of 1 Specs in 14.837 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestControllers (14.84s)
PASS
ok	github.com/stolostron/multicluster-global-hub/test/integration/operator/controllers/agent/standalone_agent	14.947s
=== RUN   TestControllers
Running Suite: Controller Integration Suite - /go/src/github.com/stolostron/multicluster-global-hub/test/integration/operator/webhook
=====================================================================================================================================
Random Seed: 1755477073

Will run 2 of 2 specs
2025-08-18T00:31:23.728Z INFO controller-runtime.webhook webhook/server.go:183 Registering webhook {"path": "/mutating"}
2025-08-18T00:31:23.729Z INFO controller-runtime.webhook webhook/server.go:191 Starting webhook server
2025-08-18T00:31:23.729Z INFO controller-runtime.certwatcher certwatcher/certwatcher.go:161 Updated current TLS certificate
2025-08-18T00:31:23.729Z INFO controller-runtime.webhook webhook/server.go:242 Serving webhook server {"host": "127.0.0.1", "port": 38881}
2025-08-18T00:31:23.729Z INFO controller-runtime.certwatcher certwatcher/certwatcher.go:115 Starting certificate watcher
2025-08-18T00:31:23.764Z INFO webhook/admission_handler.go:124 The cluster mc1 with label global-hub.open-cluster-management.io/deploy-mode=hosted, importing the managed hub in hosted mode
2025-08-18T00:31:23.865Z INFO webhook/admission_handler.go:137 Add hosted annotation into managedcluster: mc1
•2025-08-18T00:31:25.884Z INFO webhook/admission_handler.go:64 handling klusterletaddonconfig for hosted cluster: mc1
2025-08-18T00:31:25.884Z INFO webhook/admission_handler.go:74 Disable addons in cluster :mc1
•2025-08-18T00:31:25.899Z INFO manager/internal.go:538 Stopping and waiting for non leader election runnables
2025-08-18T00:31:25.899Z INFO manager/internal.go:542 Stopping and waiting for leader election runnables
2025-08-18T00:31:25.899Z INFO manager/internal.go:550 Stopping and waiting for caches
I0818 00:31:25.899980 25869 reflector.go:556] "Warning: watch ended with error" reflector="pkg/mod/k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285" type="*v1.ManagedCluster" err="an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"
2025-08-18T00:31:25.900Z INFO manager/internal.go:554 Stopping and waiting for webhooks
2025-08-18T00:31:25.900Z INFO controller-runtime.webhook webhook/server.go:249 Shutting down webhook server with timeout of 1 minute
2025-08-18T00:31:26.968Z INFO manager/internal.go:557 Stopping and waiting for HTTP servers
2025-08-18T00:31:26.968Z INFO manager/internal.go:561 Wait completed, proceeding to shutdown the manager

Ran 2 of 2 Specs in 13.685 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestControllers (13.69s)
PASS
ok	github.com/stolostron/multicluster-global-hub/test/integration/operator/webhook	13.757s
?	github.com/stolostron/multicluster-global-hub/test/integration/utils	[no test files]
?	github.com/stolostron/multicluster-global-hub/test/integration/utils/testpostgres	[no test files]
?
github.com/stolostron/multicluster-global-hub/test/integration/utils/testpostgres/cmd	[no test files]
FAIL
make: *** [test/Makefile:44: integration-test] Error 1
{"component":"entrypoint","error":"wrapped process failed: exit status 2","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:84","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2025-08-18T00:32:34Z"}
INFO[2025-08-18T00:32:35Z] Ran for 12m31s
ERRO[2025-08-18T00:32:35Z] Some steps failed:
ERRO[2025-08-18T00:32:35Z] * could not run steps: step test-integration failed: test "test-integration" failed: could not watch pod: the pod ci-op-7m89ydg2/test-integration failed after 3m43s (failed containers: test): ContainerFailed one or more containers exited
Container test exited with code 2, reason Error
INFO[2025-08-18T00:32:35Z] Reporting job state 'failed' with reason 'executing_graph:step_failed:running_pod'